[R-sig-ME] testing for a significant correlation between random slopes

Ben Bolker bbolker at gmail.com
Fri Dec 19 16:15:50 CET 2014


Jaime Ashander <jashander at ...> writes:

> 
> Hi Bill,
> 
> The intuitive approach you propose makes sense to me. I'd think one could
> specify such a model along the same lines as random effects on both slope
> and intercept but with no correlation. In lme4 (see 3.2.2 of Bates's lme4
> book) this would be:
> 
> library(lme4)
> #don't use REML when comparing models
> fit.rgr2_corr <- lmer(log(TotalDM) ~ t + starchR + t:starchR +
>                         (t + t:starchR | Sp),
>                       data = dianna, REML = FALSE)
> fit.rgr2_nocorr <- lmer(log(TotalDM) ~ t + starchR + t:starchR +
>                           (t | Sp) + (0 + t:starchR | Sp),
>                         data = dianna, REML = FALSE)
> anova(fit.rgr2_corr, fit.rgr2_nocorr)
> 
> Note this also specifies zero correlation for the interaction term with the
> intercept, but estimates a correlation between t and the intercept. I'm not
> sure of a way to get around 'giving' the intercept to one of these terms
> but maybe someone will chime in with an alternative.
> 
> Cheers,
> 
> - Jaime

  I suppose you could use

(1|Sp) + (0+t|Sp) + (0+t:starchR|Sp)

It's important to distinguish between categorical and continuous
predictors here: if t and starchR are both continuous (as suggested
by the use of the word 'slopes' below), this should work.  If either
is categorical this will be a bit trickier (I believe the ?lmer
page gives an example -- basically, you have to set up your own
dummy variables).
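For concreteness, here is a rough (untested) sketch of that
comparison, reusing the data and variable names from the original
post and assuming t and starchR are both continuous:

library(lme4)
## full model: all three random terms with all correlations estimated
fit_corr <- lmer(log(TotalDM) ~ t + starchR + t:starchR +
                   (t + t:starchR | Sp),
                 data = dianna, REML = FALSE)
## diagonal model: same random terms, all correlations fixed at zero
fit_diag <- lmer(log(TotalDM) ~ t + starchR + t:starchR +
                   (1 | Sp) + (0 + t | Sp) + (0 + t:starchR | Sp),
                 data = dianna, REML = FALSE)
## LRT on the correlation parameters; note this tests all three
## correlations at once, not just the t, t:starchR one
anova(fit_diag, fit_corr)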

  As far as using REML vs ML for these comparisons -- it is true
that REML generally gives less-biased estimates of the RE variances,
and it should be OK to do a (restricted) likelihood ratio test
between REML-fitted models that differ only in their random effects.
I don't actually know of any explicit comparisons that explore
the power/Type I error of REML- vs ML-based tests of this type.
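Something like the following (again untested) would do that
restricted comparison in lme4; refit = FALSE stops anova() from
silently refitting both models with ML:

## refit the models sketched above with REML; they differ only
## in their random effects, so a restricted LRT is defensible
fit_corr_reml <- update(fit_corr, REML = TRUE)
fit_diag_reml <- update(fit_diag, REML = TRUE)
anova(fit_diag_reml, fit_corr_reml, refit = FALSE)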

Alternatively,

* You could probably use the nlme package's "pdDiag" class
to set up your own diagonal variance-covariance matrix (sketched
below).
* You could look at the 95% profile confidence intervals on the
correlation (?confint.merMod) and see whether they overlap zero
(also sketched below).
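Rough sketches of both alternatives (untested; the pdDiag() call
mirrors Bill's original lme() specification, and parm = "theta_"
restricts the profile to the variance-covariance parameters):

## nlme: constrain the random-effects covariance matrix to be
## diagonal (all correlations zero) via pdDiag
library(nlme)
fit_diag_lme <- lme(log(TotalDM) ~ t + starchR + t:starchR,
                    random = list(Sp = pdDiag(~ t + t:starchR)),
                    data = dianna)

## lme4: profile CIs on the variance-covariance parameters; check
## whether the interval for the t, t:starchR correlation covers zero
confint(fit_corr, parm = "theta_", method = "profile")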


> > Message: 2
> > Date: Thu, 18 Dec 2014 12:37:22 -0500
> > From: "Bill Shipley" <bill.shipley at ...>

[snip]

> >
> > Hello.  I am fitting a 2-level mixed model using the lme() function
> > involving two independent variables ('t' and 'starchR') in which the
> > intercept, both slopes, and the interaction of the two slopes are all
> > random:
> >
> > fit.rgr2 <- lme(log(TotalDM) ~ t + starchR + t:starchR,
> >                 random = ~ t + t:starchR | Sp, data = dianna)

> > The model converges normally without any warning messages.  All of
> > the fixed terms are clearly different from zero.  My working
> > hypothesis requires that there also be a negative between-group
> > correlation between the slope of 't' and the interaction term (i.e.
> > groups whose slope for 't' is high at low values of 'starchR' have
> > this slope decrease more rapidly as 'starchR' increases).  When I fit
> > the above mixed model using the lme() function, I indeed find a
> > strong negative correlation of -0.867; here is the relevant part of
> > the output from summary():
> >
> >             StdDev      Corr
> > (Intercept) 1.650783941 (Intr)  t
> > t           0.055870605 -0.124
> > t:starchR   0.000309582 -0.340  -0.867
> > Residual    0.337147863
> >
> > However, there are only 20 groups and I know that large absolute
> > correlations between parameters can arise if the model is
> > overparameterized.
> >
> > Question: how can I determine if the value of -0.867 is really different
> > from zero?
> >
> > Intuitively, I would fit another model in which the covariance
> > between the random components of 't' and 't:starchR' is constrained
> > to be zero and then compare the two models via their likelihoods,
> > but I don't know how to fit such a constrained model in either
> > lme() or lmer().
> >
> > Any help or pointers to relevant literature would be appreciated.

 [snip]


