# [R-sig-ME] Estimating the correlation between random slopes and intercepts

Nick Isaac njbisaac at googlemail.com
Tue Dec 15 19:00:14 CET 2009

```Dear all,

I wonder if anyone can help me estimate the correlation between random
slopes and intercepts without autocorrelation problems?

I've followed the advice in Reinhold Kliegl's recent paper (in Visual
Cognition) and performed a simple model comparison against a model
lacking the correlation. Both fixed effects in Kliegl et al's study
were categorical, so the autocorrelation issue doesn't arise. I've
also followed the standard procedure of centering my fixed effect on zero.
My data are on species body mass, M, and metabolic rate, Q. The
grouping factor is taxonomic identity:

mlm <- mean(log(M))
x <- log(M) - mlm
m1 <- lmer(log(Q) ~ x + (x | Taxon))                    # correlated random effects
m0 <- lmer(log(Q) ~ x + (x + 0 | Taxon) + (1 | Taxon))  # independent random effects
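Since the M/Q data aren't attached, here is a minimal self-contained sketch of the same pair of fits on lme4's built-in sleepstudy data, with a likelihood-ratio comparison of the two models (the variable names follow m1/m0 above; this is an illustration, not the actual analysis):

```r
library(lme4)
# centre the predictor on zero, as with x <- log(M) - mlm above
sleepstudy$x <- sleepstudy$Days - mean(sleepstudy$Days)
m1 <- lmer(Reaction ~ x + (x | Subject), sleepstudy)                      # correlated
m0 <- lmer(Reaction ~ x + (x + 0 | Subject) + (1 | Subject), sleepstudy)  # independent
anova(m0, m1)  # likelihood-ratio test for the intercept-slope correlation
```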

My problem is that the total range of log(M) is much larger than the
range within any one Taxon. This means that the intercept random
effect is estimated at a value of log(M) well outside the range for
most groups (i.e. it is a fitted value of log(Q) extrapolated to the
point where x=0). I had thought that centering the data on zero would
mean that the number of taxa extrapolated up would balance the number
extrapolated down. Unfortunately, the data are so unbalanced that the
overall mean log(M) is much less than the mean of the group-level
means, so centering the species data on zero (as above) extrapolates
many more groups down than up, which may make my estimated correlation
more negative than it truly is. I've tried different values of log(M)
around which to centre the data (see example below): both my estimate
of the correlation and the strength of inference depend on the value
of log(M) at which I centre the data.

The bottom line is that I have a manuscript in which this correlation
is quite important. A reviewer has raised the autocorrelation problem
and suggested alternatives that have none of the strengths of mixed
models (including extracting the taxon random effects and running
regressions on residuals from a regression of these). I can bang the
drum for mixed models and explain Reinhold Kliegl's simulations, but
this won't overcome the autocorrelation problem.

I've included a simple example to show how I've explored the problem:

s <- summary(sleepstudy$Days)  # centring points: Min, 1st Qu., Median, Mean, 3rd Qu., Max
# correlated random effects
sm1 <- lapply(s, function(x)
  lmer(Reaction ~ I(Days - x) + (I(Days - x) | Subject), sleepstudy))
# uncorrelated random effects
sm0 <- lapply(s, function(x)
  lmer(Reaction ~ I(Days - x) + (I(Days - x) + 0 | Subject) + (1 | Subject), sleepstudy))
# extract the intercept-slope correlation
correl <- sapply(sm1, function(x) attr(VarCorr(x)$Subject, "correlation")[1, 2])
# extract the variance in the intercept
a.var <- sapply(sm1, function(x) VarCorr(x)$Subject[1, 1])
# compare the models by AIC
delta <- sapply(seq_along(s), function(i)
  AIC(logLik(sm0[[i]])) - AIC(logLik(sm1[[i]])))
cbind(s, correl, a.var, delta)
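For what it's worth, the refits at each centring point are not strictly necessary: because recentring is a linear reparameterisation, the intercept at x = c is a + b*c, so a single fit at c = 0 determines the random-effects covariance at every other centring point. A sketch of this (assuming the sleepstudy fit above; variance component names as in lme4's VarCorr):

```r
library(lme4)
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
vc  <- VarCorr(fm)$Subject
va  <- vc[1, 1]  # intercept variance at Days = 0
vb  <- vc[2, 2]  # slope variance (invariant under recentring)
cab <- vc[1, 2]  # intercept-slope covariance at Days = 0
# correlation when the data are centred at Days = c, i.e. I(Days - c):
#   cov(int_c, slope) = cab + c*vb;  var(int_c) = va + 2*c*cab + c^2*vb
corr_at <- function(c) (cab + c * vb) / sqrt((va + 2 * c * cab + c^2 * vb) * vb)
corr_at(0:9)  # the correlation swept across the whole range of Days
```

This makes the dependence on the centring point explicit without confounding it with optimizer noise from separate fits.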

Any suggestions would be gratefully received.

Best wishes, Nick

Nick Isaac
Macroecologist
Centre for Ecology & Hydrology
Wallingford
Oxon, UK

http://www.ceh.ac.uk/StaffWebPages/DrNickIsaac.html

```