[R-sig-ME] Testing for significant intercept-slope correlation in nlme

David Atkins datkins at u.washington.edu
Fri Aug 10 09:27:43 CEST 2012

Hi Kevin--

I have not used lme() much recently, but I think you want the pdDiag() 
function for creating a diagonal random-effects matrix. For instance, 
using the Orthodont dataset:


fm1 <- lme(distance ~ age,
           data = Orthodont,
           random = ~ 1 | Subject)                    # random intercept only
fm1.1 <- lme(distance ~ age,
             data = Orthodont,
             random = ~ -1 + age | Subject)           # random slope only
fm1.2 <- lme(distance ~ age,
             data = Orthodont,
             random = list(Subject = pdDiag(~ age)))  # intercept + slope, no correlation
fm1.3 <- lme(distance ~ age,
             data = Orthodont,
             random = ~ age | Subject)                # intercept + slope + correlation
anova(fm1, fm1.1, fm1.2, fm1.3)

summary(fm1.2, corr = FALSE)

So, fm1.2 fits a random intercept and a random slope for age but no 
covariance (correlation) between the two.

[BTW, you might check whether the autocorrelation term matters in any 
substantive way for your results.  Pretty much every time I have 
compared models with and without such terms (using daily diary data), 
the models with autocorrelation terms are hugely preferred by the fit 
indices, but the parameters I care about are virtually identical. 
Oftentimes these terms are "fighting" with the random effects to 
explain the correlation due to repeated measures, yet the fixed 
effects and their SEs are largely similar.]
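That comparison can be sketched as follows, continuing with the Orthodont example; the AR(1) term here is purely illustrative, not a recommendation for these data:

```r
library(nlme)

# Random intercept and slope, no within-subject autocorrelation
fm.noar <- lme(distance ~ age, data = Orthodont,
               random = ~ age | Subject)

# Same model plus an AR(1) structure on the within-subject residuals;
# form = ~ 1 | Subject uses the observation order within each subject
fm.ar1 <- update(fm.noar,
                 correlation = corAR1(form = ~ 1 | Subject))

# Likelihood-ratio / information-criterion comparison of the two fits,
# then a side-by-side look at the fixed effects
anova(fm.noar, fm.ar1)
cbind(no_AR1 = fixef(fm.noar), AR1 = fixef(fm.ar1))
```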

Hope that helps.

cheers, Dave

Dave Atkins, PhD
University of Washington
datkins at u.washington.edu

August 1 - October 30:

Universitat Zurich
Psychologisches Institut
Klinische Psychologie
Binzmuhlestrasse 14/23
CH-8050 Zurich

+41 44 635 71 75

I'm working on a study predicting daily urges to drink alcohol
from daily marital satisfaction.  My model has daily observations
nested within participants, and includes random intercepts (i.e.,
individuals may vary in how many urges they experience) and random
slopes for marital satisfaction (i.e., individuals may vary in how
much their marital satisfaction predicts their urges to drink).

I want to test whether the random intercept-slope correlation is
statistically significant using a chi-square test with a nested model
comparison, but I'm having trouble specifying the random effects to do
so.
I can create a model with only random slopes and compare that against
a model with random intercepts, random slopes, and intercept-slope
correlation, but doing a nested model comparison combines the
significance test of the random slope effect and the slope-intercept
correlation into one test of significance.  Ideally, I would like to
test the significance of the slope variance and the intercept-slope
correlation separately.

Using nlme (which I selected over lme4 because it allows for temporal
autocorrelation effects), I can specify

random=~1|IDNUM,  #Random intercepts
random=~1 + PREV_URGE_CTRD|IDNUM, #Random intercepts, slopes, and
intercept-slope correlation

But I cannot figure out how to specify random intercepts and slopes
but NO intercept-slope correlation, e.g.,
random= ~(1|IDNUM) + (0 + PREV_URGE_CTRD|IDNUM),  #lme4-style syntax; produces an error message
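(In nlme, uncorrelated random effects are specified with a pdDiag structure rather than lme4-style `+` terms. A minimal sketch using the variable names from this question; the outcome name URGE and the data frame name diary are assumed for illustration:)

```r
library(nlme)

# Random intercept and random slope for PREV_URGE_CTRD, with the
# intercept-slope covariance constrained to zero (diagonal G matrix)
fit.diag <- lme(fixed = URGE ~ PREV_URGE_CTRD,   # URGE is a hypothetical outcome name
                data = diary,                    # diary is a hypothetical data frame
                random = list(IDNUM = pdDiag(~ PREV_URGE_CTRD)),
                correlation = corAR1(0, form = ~ SESSION_DAY | IDNUM))
```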

1.  Does anyone know how to specify random intercepts and random
slopes but suppress the intercept-slope correlation using nlme?  I'd
like to stick with the nlme package if possible.
2.  If that is not possible, does anyone know of a good way (or
references) to test the significance of the slopes and the
significance of the intercept-slope correlation when a nested model
comparison changes both of those random effects simultaneously?

An example of the full model (which includes other fixed effects not
described above) is here:
	correlation=corAR1(0, form = ~SESSION_DAY|IDNUM),

