[R-sig-ME] LMM covariance structure

Kingsford Jones kingsfordjones at gmail.com
Sun Mar 8 21:55:42 CET 2009

Hi Pietro

First I'll second the nomination for Pinheiro and Bates as the best
resource for fitting LMMs in R/S.  Although there are theoretical
chapters that require a good understanding of matrix algebra, most of
the book consists of clear examples with code and graphics.  Of
course, as you seem to be aware, fitting mixed models without
understanding some theory can be hazardous.

Some more comments below...

On Sat, Mar 7, 2009 at 8:14 AM, Pietro Ravani <pravani at ucalgary.ca> wrote:
> Dear Doug
> I sent my reply twice (* copied below) but I cannot see it
> Can you pls check?
> Thank you
> Pietro
> (*)
> Yes, I was referring to the conditional variance-covariance structure
> of the response given the random effects, which is referred to as R in
> the book of West, Welch, Galecki that I found mathematically
> affordable (I am studying the R language now, and trying to learn more
> about correlated data for study design purposes).  And yes, I meant
> "non-zero", as stated in the erratum.

I didn't notice the erratum before my last response -- the 'non-zero'
question is now clearer.  The "conditional variance-covariance
structure of the response given the random effects" is complicated by
the structure of the random effects (i.e. Var(y_i) = Z_i \Psi Z'_i  +
\Sigma, where y_i is the vector of responses for the i^th subject,
Z_i is the random effects design matrix for the i^th subject, \Psi is
the random effects covariance matrix, and \Sigma is the within-subject
error covariance matrix).  So, to keep things "simple" I'll focus on
the within-subject error covariance, which can be decomposed into
\sigma^2 VCV, where V is diagonal with possibly non-constant error
standard deviations on the diagonal (this is structured by the
'weights' argument to modeling functions in the nlme package), and C
contains the within-subject correlation structure (1's on the diagonal
and off-diagonal structured by the parameters associated with one of
the structures seen in ?corClasses, or one supplied by the user).  So,
the 'correlation' argument to nlme modeling functions provides a tool
for structuring the non-zero off-diagonals that you were asking about.
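To make the 'correlation' argument concrete, here is a sketch using the
Ovary data shipped with nlme, along the lines of the AR(1) serial
correlation example in Pinheiro and Bates; treat the call as a template
rather than a recommended analysis:

```r
library(nlme)

# AR(1) within-mare serial correlation (the C part of Sigma = sigma^2 VCV);
# a V part could be added analogously via, e.g., weights = varPower()
fit <- lme(follicles ~ sin(2 * pi * Time) + cos(2 * pi * Time),
           data = Ovary, random = ~ 1 | Mare,
           correlation = corAR1())

fit$modelStruct$corStruct  # estimated AR(1) parameter Phi
```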

A couple things about non-zero off-diagonals to note:

i) IIRC, the off-diagonal structure does not have to be described
within subjects (e.g. if you had observations in space you might have
correlation = corExp(form=~lat + lon), or you could fit the
exponential spatial structure within, e.g., states, with correlation =
corExp(form=~lat + lon|state)).
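To see the grouped structure concretely, one can initialize a corExp
object on some made-up coordinates (the data frame below is purely
hypothetical) and inspect the per-group correlation matrices:

```r
library(nlme)

set.seed(1)
# hypothetical data: 10 locations in each of two states
dat <- data.frame(state = rep(c("A", "B"), each = 10),
                  lat = runif(20), lon = runif(20))

# exponential spatial correlation, structured within each state
cs <- Initialize(corExp(value = 1, form = ~ lat + lon | state), data = dat)
corMatrix(cs)  # a list of two 10 x 10 correlation matrices, one per state
```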

ii) even without error covariance structure, the response
off-diagonals are non-zero when there are random effects.  For
example, if there is a subject random intercept with variance
\sigma_b^2 and the within-subject errors are assumed independent
(i.e. \Sigma = \sigma^2 I), then Var(y_i) contains \sigma_b^2 +
\sigma^2 on the diagonal and \sigma_b^2 on the off-diagonal.  Thus a
compound symmetric correlation structure has been induced, where
within-subject observations are assumed to have a constant correlation
\rho = \sigma_b^2 / (\sigma_b^2 + \sigma^2).
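The induced compound symmetry is easy to verify numerically; the
variance components below are hypothetical values chosen for
illustration:

```r
sigma_b <- 2   # random-intercept standard deviation (hypothetical)
sigma   <- 1   # within-subject error standard deviation (hypothetical)
n_i     <- 4   # number of observations on subject i

Z <- matrix(1, n_i, 1)                             # random-intercept design matrix
V <- Z %*% t(Z) * sigma_b^2 + sigma^2 * diag(n_i)  # Var(y_i) = Z Psi Z' + Sigma
rho <- sigma_b^2 / (sigma_b^2 + sigma^2)           # induced correlation

cov2cor(V)  # every off-diagonal entry equals rho = 0.8
```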

> Looking at the output of the getVarCov() function in R, I see that
> choosing "conditional" as "type" I obtain a matrix with the estimate
> of the error "variance" on the diagonal (which can be heterogeneous,
> i.e. vary within cluster/group by values of cluster level co-variates)
> and all "zeros" off the diagonal.  I thought this is what LMM do:
> explaining the group heterogeneity in the data (and the resulting
> correlation in the responses) through splitting the random portion of
> the statistical model into two layers, the random effects and the
> random errors.  These random errors - conditioning on the random
> effects - I thought were normally distributed with zero mean and some
> variance sigma2 (on the diagonal of the R matrix) and independent
> (thus with zero co-variances off the diagonal of the R matrix).

As explained above, the "weights" argument frees you from the
restriction of \sigma^2 on the diagonal, and the "correlation"
argument frees you from independence conditional on the level of the
random effect.
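Your getVarCov() observations can be reproduced with a standard nlme
example; the random-intercept fit to the Orthodont data below is just
an illustration.  With no 'weights' or 'correlation' argument the
conditional matrix is \sigma^2 I, while the marginal matrix picks up
non-zero off-diagonals from the random effect:

```r
library(nlme)

fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
getVarCov(fit, type = "conditional")  # sigma^2 on the diagonal, zeros off it
getVarCov(fit, type = "marginal")     # Z_i Psi Z_i' + Sigma: non-zero off-diagonals
```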

> Typing "marginal" in the above cmd tells R to give me what I thought
> it was the combination (marginal model implied by the LMM) of the 2
> VCV matrices, D (matrix of the random effects parameters) and R
> (matrix of the random error parameter).  The fact that different
> structures (of the R matrix?) - mentioned in the previous emails
> (compound symmetry, AR 1, Toeplitz, etc) - can be specified in the
> lme() function via the correlation argument confuses me, unless they
> refer to the resulting marginal model matrix (not the R matrix
> conditional on the random effects).  I have the impression I am lost
> (although I know I have much more to learn).
> Directions re math friendly sources / learning tools (especially using
> R) would be very appreciated of course

A few more R/S resources: Julian Faraway's Extending the Linear Model
with R, Venables and Ripley's MASS, the mixed-models appendix to John
Fox's Companion to Applied Regression, and many documents that show up
if you google:

mixed OR multilevel lme OR nlme OR lmer filetype:pdf

hope that helps,

Kingsford Jones

> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
