[R-sig-ME] using lmer() to estimate intraclass correlation coefficients (ICCs)
David Afshartous
david.r.afshartous at Vanderbilt.Edu
Wed Jun 8 15:45:41 CEST 2011
Thomas,
If I understand your question correctly, you want to stratify both the
random effects variance and the residual error variance. This topic
has been discussed on this list in several threads; here is one summary:
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q3/000248.html
Basically, the way to specify a random effects stratification in lmer
and nlme is:
(Pind and Dind are indicator variables for the respective groups that
have different random effects variance)
lmer:
fm.het <- lmer(dv ~ time.num*drug + (0 + Dind | Patient) + (0 + Pind | Patient),
data = dat.new)
lme:
fm.het <- lme(dv ~ time.num*drug, random = list(~ 0 + Pind | Patient.new,
~ 0 + Dind | Patient.new), data = dat.new)
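For instance, applied to the simulated data frame dat in your message below
(the indicator names Aind/Bind and the call itself are only a sketch, untested):

library(lme4)
## indicator columns for the two methods, analogous to Pind/Dind above
dat$Aind <- as.numeric(dat$method == "A")
dat$Bind <- as.numeric(dat$method == "B")
## separate, uncorrelated subject variances for method A and method B;
## note that the residual variance is still pooled across methods in lmer
fm.ex <- lmer(rating ~ method + (0 + Aind | subject) + (0 + Bind | subject),
              data = dat)
summary(fm.ex)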
If it were a crossover study and you wanted a covariance between the two
groups, the terms would change as follows:
lmer:
(0 + Dind + Pind | Patient)
lme:
random = list(~ 0 + Pind + Dind | Patient.new)
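With the dat example and the Aind/Bind columns from the sketch above, this
would be something like (again only a sketch):

fm.cov <- lmer(rating ~ method + (0 + Aind + Bind | subject), data = dat)

which estimates a 2 x 2 covariance matrix for the subject effects (a variance
for each method plus their covariance), rather than forcing that covariance to
zero as the two separate terms above do.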
To stratify the residual error variance as well:
lmer:
If I recall correctly, this is not possible in lmer
lme:
weights = varIdent(form = ~ 1 | treatment.ind)
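Putting the pieces together for your example data, something like the following
should give method-specific subject variances and method-specific residual
variances in lme (a sketch only; pdDiag() is one way to keep the two subject
variances uncorrelated, and I have not run this on your data):

library(nlme)
fm.full <- lme(rating ~ method,
               random = list(subject = pdDiag(~ 0 + method)),
               weights = varIdent(form = ~ 1 | method),
               data = dat)
summary(fm.full)   ## varIdent estimates appear under "Variance function"
VarCorr(fm.full)   ## subject variances per method plus the residual variance

Each method's ICC would then be its subject variance divided by the sum of that
subject variance and the corresponding (stratum-specific) residual variance.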
Cheers,
David
--
David Afshartous, Ph.D.
Research Associate Professor
School of Medicine
Department of Biostatistics
Vanderbilt University
On 06/06/2011 05:38 AM, Thomas Zumbrunn wrote:
> Dear all
>
> I'm using lmer() to obtain variance component estimates for the estimation of
> intraclass correlation coefficients (ICCs). However, I'm struggling with the
> specification of a proper model.
>
> I'd like to illustrate the problem with an example. There are n = 30 subjects.
> For each subject, k = 3 ratings are done with each of two different methods,
> say A and B. With method B, the mean of the ratings per subject are the same,
> but the variance of the ratings per subject is much lower. An artificial data
> set reflecting this could look as follows:
>
> n <- 30
> k <- 3
> dat <- data.frame(subject = factor(rep(1:n, k * 2)),
>                   method = factor(rep(c("A", "B"), each = n * k)))
> set.seed(123)
> ratingsA <- rnorm(k * n)
> ratingsB <- rep(apply(matrix(ratingsA, ncol = n), 1, mean), 3) +
>   rnorm(k * n, sd = 0.1)
> dat$rating <- c(ratingsA, ratingsB)
>
> A dot plot of the ratings:
>
> library(lattice)
> dotplot(rating ~ method | subject, dat)
>
> Now, if I want to get an estimate of the ICC for method A, I could fit a
> random effects model for that part of the data set. The ICC is defined as the
> ratio of the subject-to-subject variance to the total variance (I assume there
> are no rater effects):
>
> library(lme4)
> summary(modA <- lmer(rating ~ (1 | subject), dat, subset = method == "A"))
> (ICCA <- 0.00000 / (0.00000 + 0.79631))
>
> This is what I expected since the ratings were drawn randomly from a standard
> normal distribution. Similarly, for method B, the random effects model would
> look like this:
>
> summary(modB <- lmer(rating ~ (1 | subject), dat, subset = method == "B"))
> (ICCB <- 0.0218857 / (0.0218857 + 0.0097336))
>
> The ICC is about 0.7, i.e. there is an appreciable intraclass correlation.
>
> My question is: If I want to account for the fact that the 2 * 3 = 6
> ratings per subject are not independent, how could I use lmer() to specify a
> model for the full data set in order to obtain separate variance components
> for both the subject-to-subject variance and the residual variance for each of
> the two methods (so that I can get estimates for the ICCs)?
>
> Any hints are appreciated.
>
> Best wishes
> Thomas Zumbrunn
>
>