[R] lmer model building--include random effects?
Douglas Bates
bates at stat.wisc.edu
Wed Apr 23 14:56:45 CEST 2008
On 4/22/08, Ista Zahn <istazahn at gmail.com> wrote:
> Hello,
> This is a follow-up question to my previous one: http://tolstoy.newcastle.edu.au/R/e4/help/08/02/3600.html
> I am attempting to model relationship satisfaction (MAT) scores
> (measurements at 5 time points), using participant (spouseID) and
> couple id (ID) as grouping variables, and time (years) and conflict
> (MCI.c) as predictors. I have been instructed to include random
> effects for the slopes of both predictors as well as the intercepts,
> and then to drop non-significant random effects from the model. The
> instructor and the rest of the class are using HLM 6.0, which gives
> p-values for random effects, and the procedure is simply to run a
> model, note which random effects are not significant, and drop them
> from the model. I was hoping I could do something analogous by using
> the anova function to compare models with and without a particular
> random effect, but I get dramatically different results from those
> obtained with HLM 6.0.
> For example, I wanted to determine if I should include a random effect
> for the variable "MCI.c" (at the couple level), so I created two
> models, one with and one without, and compared them:
> > m.3 <- lmer(MAT ~ 1 + years + MCI.c + (1 + years | spouseID) +
>          (1 + years + MCI.c | ID), data=Data, method = "ML")
> > m.1 <- lmer(MAT ~ 1 + years + MCI.c + (1 + years + MCI.c | spouseID) +
>          (1 + years + MCI.c | ID), data=Data, method = "ML")
> > anova(m.1, m.3)
> Data: Data
> Models:
> m.3: MAT ~ 1 + years + MCI.c + (1 + years | spouseID) + (1 + years + MCI.c | ID)
> m.1: MAT ~ 1 + years + MCI.c + (1 + years + MCI.c | spouseID) + (1 + years + MCI.c | ID)
>     Df    AIC    BIC  logLik  Chisq Chi Df Pr(>Chisq)
> m.3 12 5777.8 5832.7 -2876.9
> m.1 15 5780.9 5849.5 -2875.4 2.9428      3     0.4005
> The corresponding output from HLM 6.0 reads
> Random Effect     Standard Deviation  Variance Component  df  Chi-square  P-value
> ---------------------------------------------------------------------------------
> INTRCPT1, R0                 6.80961            46.37075  60   112.80914    0.000
> YEARS slope, R1              1.49329             2.22991  60    59.38729    >.500
> MCI slope, R2                5.45608            29.76881  60    90.57615    0.007
> ---------------------------------------------------------------------------------
> To me, this seems to indicate that HLM 6.0 suggests the random effect
> should be included in the model, while R suggests it need not be.
> This is not (quite) a "why do I get different results with X" post,
> but rather an "I'm worried that I might be doing something wrong"
> post. Does what I've done look reasonable? Is there a better way to
> go about it?
The first thing I would try to determine is whether the model fit by
HLM is equivalent to the model you have fit with lmer. The part of
the HLM model output you have shown lists only variance components.
It does not provide covariances or correlations. The lmer model is
fitting a 3 by 3 symmetric positive-definite variance-covariance
matrix with a total of 6 parameters: 3 variances and 3 covariances.
It may be that HLM is fitting a simpler model in which the covariances
are all zero.
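One way to check this in R is to look at the estimated variance
components from your fit and also to fit a model in which the random
effects are uncorrelated, since separate random-effects terms for the
same grouping factor are modeled as independent. This is only a sketch
that mirrors the variable names and calls in your message (the name
m.diag is just for illustration):

    ## estimated variance components (and correlations) for each grouping factor
    VarCorr(m.1)

    ## same random effects, but with all covariances constrained to zero
    m.diag <- lmer(MAT ~ 1 + years + MCI.c +
                   (1 | spouseID) + (0 + years | spouseID) + (0 + MCI.c | spouseID) +
                   (1 | ID) + (0 + years | ID) + (0 + MCI.c | ID),
                   data = Data, method = "ML")

    ## m.diag is nested within m.1, so the two fits can be compared directly
    anova(m.diag, m.1)

If HLM is indeed fitting the diagonal structure, m.diag rather than
m.1 would be the comparable model.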
The next question would be exactly how HLM is calculating that
p-value. I wonder where the 60 degrees of freedom come from. Do you
happen to have 60 couples in the study?
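If you are not sure, a quick check of the number of levels of each
grouping factor (again assuming the data frame and identifiers from
your message) is

    ## number of couples and of spouses in the data
    length(unique(Data$ID))
    length(unique(Data$spouseID))

The printed summary of the fitted model also reports the number of
groups for each grouping factor.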