[R-sig-ME] Including random effects creates structure in the residuals
Pierre de Villemereuil
pierre.de.villemereuil at mailoo.org
Tue Feb 27 14:06:20 CET 2018
Hi Paul,
Thank you for your response and interest in my question. While I agree that regression to the mean does exist in these settings, I don't see why it should yield such a correlation between the BLUPs and the residuals (after all, even if the two were totally independent, you'd still get the phenomenon you're describing, wouldn't you?). Could you explain why this should be the case? Maybe I'm missing a big point in your explanation; if so, please forgive me.
It got me thinking, however, that the correlation between the BLUPs and the residuals could arise from a fundamental constraint in the data, as you suggested, and I think I now understand what is going on (again, if this is what you meant, please forgive me if I misunderstood your point). In short, it arises from an unbalanced design in the repeated measures (some individuals do not come back to complete the study).
This can be seen in the following graph, which shows the residuals (e) against the BLUPs (u, which also include the effect of "visit", though that doesn't much affect the trend here), depending on whether we have 1, 2, 3 or 4 repeated measures for that individual:
https://ibb.co/dDgF3H
A perfect linear covariation is to be expected with only 1 visit, because the BLUP and the residual are then essentially non-identifiable, while this constraint fades as more repeated measures are added to the data. Does this interpretation make sense to you?
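This unbalancedness argument can be checked with a small simulation, sketched below on invented data (not the cohort from the thread; assumes lme4 is available). With a single measure per individual, the conditional residual and the BLUP are both proportional to the same deviation from the fixed part, so their correlation is exactly 1; with more repeats, the within-individual variation lets the two separate.

```r
# Minimal sketch (hypothetical data): correlation between BLUPs and
# conditional residuals as a function of the number of repeated measures.
library(lme4)

set.seed(1)
n_id  <- 400
n_rep <- sample(1:4, n_id, replace = TRUE)         # unbalanced design
d <- data.frame(id = factor(rep(seq_len(n_id), times = n_rep)),
                k  = rep(n_rep, times = n_rep))    # repeats per individual
u   <- rnorm(n_id)                                 # true individual effects
d$y <- u[as.integer(d$id)] + rnorm(nrow(d))        # intercept-only "phenotype"

fit     <- lmer(y ~ 1 + (1 | id), data = d)
d$e     <- resid(fit)                              # conditional residuals
d$u_hat <- ranef(fit)$id[as.character(d$id), 1]    # each obs's individual BLUP

# correlation between BLUP and residual among individuals with k repeats;
# with both variances equal to 1, theory gives roughly 1/k: 1, 0.5, 0.33, 0.25
cors <- sapply(1:4, function(k) cor(d$u_hat[d$k == k], d$e[d$k == k]))
round(cors, 2)
```

With only one visit the two quantities are exactly collinear, so any amount of shrinkage forces a perfect linear trend, matching the single-visit panel of the graph above.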
Thank you for your help! The bit about checking residuals in GLMMs is also very interesting; I'll think about DHARMa next time I have to do this for a GLMM!
Cheers,
Pierre
Le mardi 27 février 2018, 12:03:17 CET Paul Johnson a écrit :
> Hi Pierre,
>
> I don’t think there is a problem with the residuals. Just to check, the problem you see is that there’s a linear trend in the residuals vs fitted values plot when the ID random effect is included (which in a standard OLS LM would be impossible).
>
> The reason for the correlation is that the fitted values contain the ID random effects, and these are inevitably correlated with the residuals. My intuitive understanding of this is as follows. Say some students sit a test twice, on two separate days. A student's score on a given day will be a combination of their ability (ID random effect) and unmeasured (i.e. noise) factors, like how the student was feeling on that day. Assuming that both ability and luck contribute substantially to the scores, it's inevitable that the extreme upper end of the distribution will be populated by scores from students who are both able (high ID random effect) and were lucky on that day (high error residual). The same goes in the negative direction for the lower end of the distribution. This is the basis of regression to the mean: if we pick a student with an extreme score and re-test them, we expect their score to be less extreme. If I remember correctly, it's fairly straightforward to predict the correlation of the residuals and fitted values for a given model.
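The test-score example above can be sketched in a few lines of R (all numbers invented): selecting students who were extreme on day 1 over-samples lucky residuals, so the same students score closer to the mean on day 2.

```r
# Regression to the mean in the two-test-days example (invented numbers):
set.seed(42)
n       <- 1000
ability <- rnorm(n)            # ID random effect
day1    <- ability + rnorm(n)  # score = ability + that day's luck
day2    <- ability + rnorm(n)  # same ability, fresh luck

top <- day1 > quantile(day1, 0.95)  # top 5% of day-1 scores
c(day1 = mean(day1[top]), day2 = mean(day2[top]))  # day-2 mean is less extreme
```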
>
> On the broader topic of checking residuals from GLMMs…
> I wrote a simple function to check residuals from lme4 fits by simulating residuals from the fitted model and plotting them on top of the real residuals. If they look similar across several simulated data sets then I'm reassured that the model fits well. This is particularly useful for non-normal GLMMs, where (despite popular belief) there's no assumption of normality of the Pearson residuals.
>
> library(devtools)
> install_github("pcdjohnson/GLMMmisc")
> library(GLMMmisc)
> library(lme4)
> fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
> sim.residplot(fm1)
> # note the correlation between the residuals and the fitted values
>
> Florian Hartig has written a more sophisticated package that uses the same basic idea called DHARMa:
> https://cran.r-project.org/web/packages/DHARMa/index.html
> His blog post:
> https://theoreticalecology.wordpress.com/2016/08/28/dharma-an-r-package-for-residual-diagnostics-of-glmms/
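For completeness, a sketch of typical DHARMa usage on the same sleepstudy fit (assuming DHARMa is installed; see the package vignette for the full workflow):

```r
# Hedged sketch: DHARMa's simulation-based residuals for an lme4 fit.
library(lme4)
library(DHARMa)

fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
sim <- simulateResiduals(fittedModel = fm1, n = 250)  # simulate from the fit
plot(sim)  # QQ plot of scaled residuals + residuals vs predicted
```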
>
> All the best,
> Paul
>
>
> > On 27 Feb 2018, at 08:53, Pierre de Villemereuil <pierre.de.villemereuil at mailoo.org> wrote:
> >
> > Dear all,
> >
> > I have an issue that I can't get my head around. I am working on a human cohort dataset studying heart rate. We have repeated measures at several time points and a model with different slopes according to binned age categories (the variable called "broken" hereafter, for "broken lines").
> >
> > My issue is that when I include an individual ID effect (to account for the repeated measures), I obtain structured residuals, whereas this is not the case for a model without this effect.
> >
> > Here are my models:
> > mod_withID <- lmer(cardfreq ~ sex +
> >                        broken +
> >                        age:broken +
> >                        betabloq +
> >                        cafethe +
> >                        tabac +
> >                        alcool +
> >                        (1|visite) +
> >                        (1|id),
> >                    data = sub)
> > mod_noID <- lmer(cardfreq ~ sex +
> >                      broken +
> >                      age:broken +
> >                      betabloq +
> >                      cafethe +
> >                      tabac +
> >                      alcool +
> >                      (1|visite),
> >                  data = sub)
> >
> > The AIC (computed with a fit with REML = FALSE) clearly favours the model including the ID effect:
> > AIC(mod_withID)
> > 75184.51
> > AIC(mod_noID)
> > 76942.09
> >
> > Yet the model including the ID effect shows a poor fit from the residuals' point of view (structured residuals), as the plots below show:
> > - The residuals with the ID effect:
> > https://ibb.co/b6WsFx
> > - The residuals without the ID effect:
> > https://ibb.co/fFVDNc
> >
> > From this, I gather that the fixed-effect part is good enough to provide a good fit, but there is a covariance between the residuals and the BLUPs from the ID effect (I've checked this). In particular, if we marginalise over the random effects to compute the residuals, then everything is fine, suggesting the issue lies in the random rather than the fixed part.
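The marginal-vs-conditional check described above can be reproduced on public data (a sketch using sleepstudy as a stand-in, since the cohort data aren't available): conditional residuals, which subtract the BLUPs, correlate with the conditional fitted values, while marginal residuals (response minus the fixed part only) should show much less trend.

```r
# Conditional vs marginal residuals in lme4 (sleepstudy as a stand-in):
library(lme4)

fit    <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
e_cond <- resid(fit)                    # conditional: y - Xb - Zu
mu_fix <- predict(fit, re.form = NA)    # fixed part only
e_marg <- sleepstudy$Reaction - mu_fix  # marginal: y - Xb

c(conditional = cor(fitted(fit), e_cond),
  marginal    = cor(mu_fix, e_marg))
```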
> >
> > I'm a bit puzzled by this. Why would adding an individual effect create such a structure in the residual part? Why does this covariance between the individual BLUPs and the residuals arise?
> >
> > I'd happily take anyone's input on this as I'm at a loss regarding what to do to solve this.
> >
> > Cheers,
> > Pierre
> >
> > _______________________________________________
> > R-sig-mixed-models at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>