[R-sig-ME] lme: random effects for replicated growth curves
Ben Bolker
bbolker at gmail.com
Tue Aug 20 16:53:59 CEST 2013
[cc'ing back to r-sig-mixed-models ]
I'm still not 100% sure what the problem is, and I don't know if I
have time to dig through in detail, but here's an example that _might_
help -- if you want to use a grouped correlation structure without
incorporating random effects ...
library(nlme)
set.seed(101)
d <- data.frame(x = 1:1000,
                f = factor(rep(1:50, each = 20)),
                y = rnorm(1000),
                time = rep(1:20, 50))
lme(y ~ x, correlation = corAR1(form = ~ time | f), random = ~1 | f, data = d)
gls(y ~ x, correlation = corAR1(form = ~ time | f), data = d)  ## same correlation structure, without RE
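In your case that might look something like the following (untested; it
re-uses the variable names from your messages below, and assumes that
subject and trialInSub together identify each individual trial --
trialID is just a label constructed for this purpose):

dataGroup$trialID <- interaction(dataGroup$subject, dataGroup$trialInSub)  ## one id per trial
gls(values ~ time*condition, data = dataGroup,
    correlation = corAR1(form = ~ time | trialID),
    weights = varPower(form = ~ time), method = "REML")

Note that this drops the subject-level random effects entirely, so any
between-subject variation would have to be picked up by the fixed
effects and/or the correlation structure.
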
On 13-08-20 07:48 AM, Adrien Combaz wrote:
> Hi Ben, I realize that my question was quite long and I appreciate
> that you took the time to go through it and answer it.
>
> You were right about my problem with the variance function: when
> inspecting the residuals of the new model, I should have explicitly
> specified type="pearson" or type="normalized" (both give the same
> result here). I should have seen this in Pinheiro and Bates' book
> (p. 218). I had a slightly outdated version of the nlme package whose
> help page for residuals.lme stated that "pearson" was the default
> type. This solved my heteroscedasticity issue.
>
> However, I still have a correlation problem. My model is now
>
> lm0 <- lme(values ~ time*condition, data = dataGroup,
>            random = ~time|subject/condition,
>            weights = varPower(form = ~time),
>            control = list(opt="optim"), method = "REML")
>
> The model gives a straight line along time for each subject and
> condition that is supposed to represent all 12 trials for this
> subject/condition. Now it appears that a trial deviating from the
> model at time t will keep on deviating at time t+1 (this is related
> to the increase of variability over time that I mentioned in my
> initial question). The variance function mentioned earlier rescales
> the residuals so that this variation stays constant along time, but
> the model still has to account somehow for the correlation along time
> of the residuals within each trial.
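>
> One way to check this would be to look at the autocorrelation of the
> normalized residuals, e.g.
>
> plot(ACF(lm0, resType = "normalized"), alpha = 0.05)
>
> (the lags are based on the observation order within each
> subject/condition group, so the data should be sorted by trial and
> time for this to be meaningful).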
>
> What I would like is to be able to specify that the correlation
> structure runs along time within each trial independently, without
> having to specify the trials as a random effect in my model. This is
> the same issue as posted here
> https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q1/003311.html;
> unfortunately no answer was provided.
>
> So I am still stuck - for the moment - with my other model, which
> specifies a random slope and intercept for each subject and for each
> trial within subject:
>
> lm1 <- lme(values ~ time*condition, data = dataGroup,
>            random = ~time|subject/trialInSub,
>            control = list(opt="optim"), method = "REML")
>
> Unfortunately, as mentioned in the original question, this model
> also leads to non-normal and correlated residuals. I could not
> correct for this using variance and/or correlation structures. I am
> not sure I can do much more with this type of model, and I am
> actually wondering whether those patterns in the residuals indicate
> that I should be using a non-linear model.
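>
> For reference, one way to check the non-normality directly on the
> normalized residuals would be, e.g.
>
> qqnorm(lm1, ~ resid(., type = "normalized"))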
>
>
> Adrien Combaz
>
> -----Original Message-----
> From: r-sig-mixed-models-bounces at r-project.org
> [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Ben Bolker
> Sent: Monday, August 19, 2013 9:39 PM
> To: r-sig-mixed-models at r-project.org
> Subject: Re: [R-sig-ME] lme: random effects for replicated growth curves
>
> Adrien Combaz <Adrien.Combaz at ...> writes:
>
>>
>> Dear nlme users,
>
>> I am measuring the evolution of the brain response to a visual
>> stimulation over time. The measures are taken every second, from 1
>> second to 14 seconds (each measure at time t gives a value
>> summarizing the magnitude of the brain response from time 0 to time
>> t). I have 8 subjects (S1, ..., S8) and 2 experimental conditions
>> (C0, C1). For each subject and condition I replicate the
>> measurement 12 times. I therefore obtain, for each subject and
>> condition, 12 growth curves.
>
> Part of the reason this question may not have gotten the attention it
> deserves is that it's rather long. I know I often have the reaction
> "hmm, that looks complicated, I don't have time to look at it right
> now" to long/involved questions. It is certainly sensible that you
> want to explain the context of your problem and what you've already
> tried, but it may be that some problems are just a bit too involved
> to be easily tractable in a mailing list/forum format.
>
>
> [snip]
>
> An increase in variability over time would seem to be more sensibly
> modeled by your second attempt (varStruct/heteroscedasticity) than by
> your first (temporal autocorrelation); sometimes it can also be
> handled by transformation (although that is likely to affect/mess up
> linearity to some extent at the same time).
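>
> For instance (a rough sketch -- 'resp', 'dat' and 'subject' stand in
> for your actual response, data frame and subject factor, and the log
> assumes a strictly positive response):
>
> lme(resp ~ stimDuration, data = dat, random = ~ stimDuration | subject,
>     weights = varPower(form = ~ stimDuration))  ## model the increasing variance
> lme(log(resp) ~ stimDuration, data = dat,
>     random = ~ stimDuration | subject)          ## or stabilize it by transformation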
>
>> As I didn't manage to use correlation structures, I tried to use
>> variance functions to model heteroscedasticity:
>>
>> lm1 <- update(lm0, weights = varPower(form = ~ stimDuration))
>>
>> But when looking at the residuals, they still displayed increasing
>> variance over time. The same thing happened when using "varExp" or
>> "varConstPower".
>
> Did you make sure to use residuals(., type="pearson")? Otherwise, the
> heteroscedasticity may be correctly modeled, but the (raw) residuals
> won't reflect that the model has taken the heteroscedasticity into
> account (see also type="normalized" for models with a correlation
> structure; more generally, see ?residuals.lme).
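>
> For example (with the lm1 from above):
>
> r_raw  <- residuals(lm1)                    ## raw residuals: still show the modeled heteroscedasticity
> r_pear <- residuals(lm1, type = "pearson")  ## standardized by the estimated variance function
> plot(lm1, resid(., type = "pearson") ~ fitted(.), abline = 0)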
>
>
> Good luck
>    Ben Bolker
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>