[R-sig-ME] Repeated measures in lmer().

Kingsford Jones kingsfordjones at gmail.com
Fri Apr 3 09:51:15 CEST 2009


On Thu, Apr 2, 2009 at 4:19 PM, Rolf Turner <r.turner at auckland.ac.nz> wrote:

[snip]
>        Can you suggest a sensible recipe or two that I could try to
>        get myself started with?
>

Hi Rolf,

Assuming your data are set up in long format, with 3 rows per student
and the response being the change in test score over each of the three
time periods, a nice starter recipe is:

lmer(scoreChange ~ ordered(gap) + (1|school/student), data=yourDat)
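
If your raw data are instead in wide format (one row per student, one
change-score column per time period), something along these lines gets
them into shape.  This is just a sketch -- the data frame and column
names ('wideDat', 'change1', etc.) are made up for illustration:

 ## hypothetical wide data: one row per student, one change score
 ## per time period
 wideDat <- data.frame(student = factor(1:6),
                       school  = factor(rep(1:2, each = 3)),
                       change1 = rnorm(6),
                       change2 = rnorm(6),
                       change3 = rnorm(6))

 ## stack into long format: 3 rows per student
 yourDat <- reshape(wideDat,
                    varying   = c("change1", "change2", "change3"),
                    v.names   = "scoreChange",
                    timevar   = "gap",
                    idvar     = "student",
                    direction = "long")
 yourDat$gap <- factor(yourDat$gap)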


Using the default polynomial contrasts for ordered factors, this will
estimate linear and quadratic fixed effects for the gap term (i.e.
does the difference in scores increase or decrease over the 3 time
periods -- linearly or quadratically?), as well as random intercepts
for schools and for students nested within schools.  Here's an
example:

 library(lme4)
 set.seed(777)

 ## 5 schools, 3 students per school, 3 time gaps per student (45 rows)
 student <- factor(rep(1:15, each = 3))
 school  <- factor(rep(1:5,  each = 9))
 gap     <- factor(rep(1:3, 15))

 ## true means 2, 4, 8 across the gaps, plus student and school
 ## random intercepts and residual error
 scoreChange <- rnorm(45, 2^(1:3), 2) +
                rnorm(15)[student] +
                rnorm(5)[school]

 f1 <- lmer(scoreChange ~ ordered(gap) + (1|school/student))
 summary(f1)


In the example there isn't enough power to clearly pick up the positive
quadratic trend in the score changes over time (the true means 2, 4, 8
curve upward), but it does provide strong evidence for the linear trend.
Also notice the 0 correlation between the estimated linear and quadratic
effects -- nice and orthogonal.
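
If it helps to see where that orthogonality comes from, here is the
contrast matrix R uses by default for a 3-level ordered factor (the
values shown are what contr.poly() returns; its columns are orthonormal):

 contr.poly(3)
 ##              .L         .Q
 ## [1,] -0.7071068  0.4082483
 ## [2,]  0.0000000 -0.8164966
 ## [3,]  0.7071068  0.4082483

 crossprod(contr.poly(3))  ## identity (up to rounding) ==> orthogonal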

If I'm thinking about things correctly, the random effects in this
model induce two layers of compound-symmetric correlation structure:
all scores within a student share one constant correlation, and scores
from different students within the same school share another (smaller)
constant correlation.  My logic is just an extension of the case with
a single random grouping factor, where y_i holds the observations in
group i.  Then

 Var(y_i) = Z_i sigma_b^2 Z_i' + sigma_e^2 I,

where Z_i is the random-effects design matrix (here just a column of
1's, though it could contain covariates or factors for random slopes),
sigma_b^2 is the group-to-group variance, sigma_e^2 is the error
variance, and I is the identity matrix.  So Var(y_i) has sigma_b^2 off
the diagonal and the sum of the two variances on the diagonal ==>
compound symmetry within a group.  Combining all observations, Var(y)
is block diagonal, with the compound-symmetric group matrices along
the diagonal and 0's elsewhere (i.e. observations in different groups
are independent).
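
A quick numeric check of that claim, using the f1 fit from above (note:
as.data.frame(VarCorr(...)) needs a reasonably recent lme4, and the
variable names below are mine):

 ## extract the estimated variance components
 vc <- as.data.frame(VarCorr(f1))
 sig2_student <- vc$vcov[vc$grp == "student:school"]
 sig2_school  <- vc$vcov[vc$grp == "school"]
 sig2_err     <- vc$vcov[vc$grp == "Residual"]

 ## implied covariance of one student's 3 observations: both random
 ## intercepts are shared within a student, the errors are not
 V_i <- matrix(sig2_school + sig2_student, 3, 3) + diag(sig2_err, 3)
 cov2cor(V_i)  ## constant off-diagonal correlation ==> compound symmetry

 ## between two students in the same school the covariance is just
 ## sig2_school, giving the second, weaker layer of correlation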

Hopefully some of that is helpful,

Kingsford Jones


>        I'm unclear as to the covariance structure induced or assumed
>        in the polynomial models that you have fitted to the Oxboys
>        data.  There are, for each boy, 9 observations of the boy's
>        height, at various ages.  If we let the heights for a
>        particular boy be (H_1,...,H_9), what can we say --- or what
>        are we assuming --- about, e.g., Cov(H_3,H_7)?  Is this
>        expressed as some function of (age_3 - age_7) for that boy?
>        Or do these covariances not come into the picture at all?
>
>        Grateful as always for enlightenment.
>
>                cheers,
>
>                        Rolf
>
> P. S.  If anyone wants to have a go at analyzing the real data ..... :-)
>



