[R-sig-ME] Always allow for correlation of random effects?

Ben Bolker bbolker at gmail.com
Tue Dec 3 22:10:07 CET 2013


AvianResearchDivision <segerfan83 at ...> writes:

> 
> Hi all,
> 
> I am working with mixed models and have the following general model
> structure:
> 
> model <- lmer(X ~ Y*Z + (A|B)), where Y is a continuous, mean-centered
> environmental variable and Z is year (2012 or 2013).
> 
> Part of my interest in my study is to explore variation in plasticity
> between individuals, not just population level plasticity.  When settling
> on a final model, I first check for significance of random 
> effects and then
> I worry about the fixed effects structure.  For the random effects
> structure, even if there is not significant variation between individuals
> in their slopes, I leave this term in the model because it is a primary
> interest of mine.  My question is: if I am leaving random slopes in all of
> my final models along with random intercepts, should I also allow for the
> correlation of the random effects, i.e. (A|B), or should I not allow for
> this, i.e. (A+0|B), if there is no significant correlation when checked by
> LRT on models fitted by REML?
> 
> I initially thought that I would remove the ability for correlated random
> effects if there was no significant correlation, but then I read the
> following: "In models with both individual-specific 
> elevations and slopes we
> allowed for the potential correlation between these, to ensure that BLUP
> estimates produced by the models were not affected by the method used to
> centre covariates." from "Phenotypic plasticity in a maternal trait in red
> deer" by Nussey et al. 2005.  Do you understand their reasoning?

  Very briefly (if Gmane lets me): if the correlation between slopes
and intercepts is suppressed, then you can change the results by
linear transformation of the 'A' variable (you didn't tell us what
it was -- it might be helpful); for example, if 'A' is a continuous
predictor, then centering it will change the answers.  If 'A' is
a categorical predictor, then changing the contrasts (e.g. from the
default treatment contrasts to sum-to-zero contrasts) will change the
answers.  If you allow for the correlations, then the results will be
invariant to linear transformations/combinations of the 'A' variable.
Rune Haubo has some nice little examples of this phenomenon suggesting
that the common practice of dropping the correlations for parsimony
can be misleading -- I don't know whether they're publicly available
anywhere.
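To make the invariance argument concrete: shifting a continuous A by a
constant c maps the random intercept a to a + c*b (the slope b is
unchanged), which is a linear transformation of the random-effect vector.
A general 2x2 covariance matrix stays a general covariance matrix under
that transformation, but a diagonal one does not. A quick sketch, with
covariance numbers invented purely for illustration:

```r
## Shifting the predictor A -> A - c0 sends (intercept, slope) = (a, b)
## to (a + c0*b, b), a linear map with matrix Tm.
c0 <- 2
Tm <- matrix(c(1, 0, c0, 1), nrow = 2)   # column-major: rows (1, c0), (0, 1)

## Correlated model: any 2x2 covariance maps to another valid 2x2
## covariance, so the model family is closed under centering.
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
Tm %*% Sigma %*% t(Tm)

## Zero-correlation model: a diagonal Sigma picks up an off-diagonal
## term c0 * var(b) after the shift, so the restricted family is NOT
## closed under centering -- the fit depends on how you centre A.
Sigma0 <- diag(2)
(S <- Tm %*% Sigma0 %*% t(Tm))
S[1, 2]   # c0 * var(b) = 2, no longer zero
```

So the "no correlation" constraint is really a statement about one
particular origin/coding of A, which is why the Nussey et al. quote ties
the correlation to the method used to centre covariates.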

  By the way,
by dropping the correlations do you mean the difference between

(1|B) + (0+A|B)

and

(A|B)

?  This works for a continuous variable, but *not* for a categorical
variable -- at present if you want to do this for a categorical
variable you need to generate your own dummy variables (e.g. see
https://github.com/lme4/lme4/issues/139 )
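A minimal sketch of the dummy-variable workaround, with invented data and
variable names (and assuming lme4 is installed); lmer may report a
singular fit on pure-noise data like this, but the formula illustrates
the structure:

```r
## Uncorrelated random effects for a categorical predictor A, grouping
## factor B.  Data and names are invented for illustration.
library(lme4)
set.seed(1)
d <- data.frame(B = factor(rep(1:20, each = 6)),
                A = factor(rep(c("a1", "a2", "a3"), 40)),
                X = rnorm(120))

## Hand-made dummies for the non-reference levels of A; each random
## term then gets its own variance and no correlations are estimated
## (unlike (A|B), which estimates all pairwise correlations).
d$A2 <- as.numeric(d$A == "a2")
d$A3 <- as.numeric(d$A == "a3")
m <- lmer(X ~ A + (1 | B) + (0 + A2 | B) + (0 + A3 | B), data = d)
```

Recent lme4 versions also provide a dummy() helper that does this
expansion inline within the formula.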


