[R-sig-ME] Proper analysis for the Machines dataset in lme4

Reinhold Kliegl reinhold.kliegl at gmail.com
Mon Apr 28 08:38:17 CEST 2008


Dear Michael,

I just want to prevent a misreading of some of your comments. As far
as m1 vs. mr1 and m2 vs. mr2 are concerned, lme and lmer produce
identical estimates for fixed and random effects. anova(m1, m2) and
anova(mr1, mr2) also produce identical results if you fit the models
with method = "ML". Crossed random effects are not easily specified in
lme, but in principle they can be. So it is not correct to say that
the two packages clash on m3 and mr3 either.
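
For instance (a sketch, using the model objects from your code below,
and remembering the detach() dance from your script, since the two
packages clash; method/REML are the current argument names, and older
lme4 versions also took method = "ML"):

  m1ML  <- update(m1,  method = "ML")  # lme: refit by maximum likelihood
  m2ML  <- update(m2,  method = "ML")
  anova(m1ML, m2ML)
  mr1ML <- update(mr1, REML = FALSE)   # lmer: likewise ML instead of REML
  mr2ML <- update(mr2, REML = FALSE)
  anova(mr1ML, mr2ML)                  # matches anova(m1ML, m2ML)

And for crossed random intercepts in lme, the usual workaround is a
constant grouping factor carrying a block-diagonal pdMat; schematically,
with f1 and f2 standing in for two crossed grouping factors in some
data frame dat:

  dat$one <- factor(rep(1, nrow(dat)))     # one dummy group for everything
  lme(y ~ x, data = dat,
      random = list(one = pdBlocked(list(pdIdent(~ f1 - 1),
                                         pdIdent(~ f2 - 1)))))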

The comparison of m1 and m2 (or mr1 and mr2) is conceptually
questionable. m1 assumes there are 6 Workers; m2 assumes there are, in
effect, 18 Worker units, that is, as if a different group of 6 persons
had worked on each of the 3 machines. Presumably, the experimental
design decides whether m1 (mr1) or m2 (mr2) is the correct choice; the
check below illustrates the difference.
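
A quick way to see the grouping structures the two models imply (a
sketch, using the mach data frame from your code below):

  with(mach, nlevels(Worker))                        # m1/mr1: 6 units
  with(mach, nlevels(interaction(Worker, Machine)))  # m2/mr2: 6 x 3 = 18 units
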
     There is a meaningful comparison between mr3 and mr1. If mr3 is
significantly better, it means that the six workers differ reliably
not only in mean performance (the intercept) but also in the size of
the Machine effect (i.e., there is reliable between-Worker variance in
the Machine effects). They actually do:

> anova(mr1, mr3)
Data: mach
Models:
mr1: score ~ Machine + (1 | Worker)
mr3: score ~ Machine + (Machine | Worker)
    Df    AIC    BIC  logLik  Chisq Chi Df Pr(>Chisq)
mr1  5 303.70 313.65 -146.85
mr3 10 236.42 256.31 -108.21 77.285      5    3.1e-15 ***

Thus, if m1 (mr1) reflects the correct design, then mr3 is an
improvement on it. If m2 (mr2) reflects the correct design, then that
is the model to use.
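
To see what the five extra parameters of mr3 buy (two Machine-effect
variances plus three correlations), something like this sketch shows
where the gain in log-likelihood comes from:

  VarCorr(mr3)  # Worker intercept SD, Machine slope SDs, correlations
  ranef(mr3)    # per-worker deviations, including per-worker Machine effects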

The Baayen example did not involve a comparison between lme and lmer,
as far as I could see.

I do not know much about gmodels, so I leave that part to somebody else.

Best
Reinhold


On Sun, Apr 27, 2008 at 4:49 PM, Michael Kubovy <kubovy at virginia.edu> wrote:
> Dear Mixed Ones,
>
>  While waiting for Doug Bates's book, I'm trying to understand
>  the differences between the recommendations of Pinheiro & Bates using
>  the nlme package, and the use of lmer in the lme4 package. I've
>  gotten myself confused.
>
>  # because lme4 and nlme sometimes clash:
>  detach('package:lme4')
>  require(nlme)
>  data(Machines)
>  mach <- Machines
>  # Machines and Workers are balanced
>  with(mach, table(Machine, Worker))
>  # Here is how it's analyzed in Pinheiro & Bates
>  m1 <- lme(score ~ Machine, data = mach, random = ~ 1 | Worker)
>  m2 <- update(m1, random = ~ 1 | Worker / Machine)
>  anova(m1, m2)
>  # Back to lme4
>  detach('package:nlme')
>  require(lme4)
>  mr1 <-  lmer(score ~ Machine + (1 | Worker), data = mach)
>  mr2 <- update(mr1, . ~ Machine + (1 | Worker / Machine))
>  mr3 <- update(mr1, . ~ Machine + (1 + Machine | Worker))
>  anova(mr1, mr2, mr3)
>
>  require(gmodels)
>  options(digits = 3)
>  set.seed(20080427)
>  zapsmall(ci(m2))
>  zapsmall(ci(mr2, sim.lmer = TRUE, n.sim = 10000))
>  # The following produces SEs an order of magnitude larger than those of ci(mr2)
>  zapsmall(ci(mr3, sim.lmer = TRUE, n.sim = 10000))
>
>  # From this it would seem that mr2 is the proper model.
>  # But I have never seen a crossed design written that way in lme4.
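
A side note on that formula: in lmer, (1 | Worker/Machine) is nesting
shorthand rather than a crossed specification; it expands to two
intercept terms, so mr2 could equivalently be written as, e.g.,

  mr2b <- lmer(score ~ Machine + (1 | Worker) + (1 | Worker:Machine),
               data = mach)

that is, one intercept per Worker plus one per Worker-by-Machine cell.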
>
>  # Another example:
>  # Baayen (in the page proofs of Analyzing Linguistic Data:
>  # A practical introduction to statistics, p. 283) takes an example from
>  # Raaijmakers, J. G. W.; Schrijnemakers, J. M. C. & Gremmen, F.
>  # How to Deal with ``The Language-as-Fixed-Effect Fallacy'':
>  # Common Misconceptions and Alternative Solutions
>  # Journal of Memory and Language, 1999, 41, 416-426
>
>  subj <- factor(rep(paste('S', 1:8, sep = ''), each = 8))
>  item <- factor(rep(paste('W', 1:8, sep = ''), 8))
>  soa <- factor(rep(c("Short", "Long"), each = 4, 8))
>  rt <- c(
>  546, 567, 547, 566, 554, 545, 594, 522,
>  566, 556, 538, 566, 512, 523, 569, 524,
>  567, 598, 568, 584, 536, 539, 589, 521,
>  556, 565, 536, 550, 516, 522, 560, 486,
>  595, 609, 585, 588, 578, 540, 615, 546,
>  569, 578, 560, 583, 501, 535, 568, 514,
>  527, 554, 535, 527, 480, 467, 540, 473,
>  551, 575, 558, 556, 588, 563, 631, 558)
>  sp <- data.frame(subj = subj, item = item, soa = soa, rt = rt)
>
>  # subj and item are crossed.
>  with(sp, table(subj, item))
>  # subj and soa also
>  with(sp, table(subj, soa))
>  # items nested under soa
>  with(sp, table(item, soa))
>  # pr2 is analogous to m2 and mr2
>  pr2 <- lmer(rt ~ soa + (1 | subj / soa) + (1 | item), sp)
>  # pr3 is analogous to mr3. This is how Baayen analyzes it
>  # (the results aren't identical to his; I don't know why):
>  pr3 <- lmer(rt ~ soa + (1 + soa | subj) + (1 | item), sp)
>  ci(pr2, sim.lmer = TRUE, n.sim = 10000)
>  ci(pr3, sim.lmer = TRUE, n.sim = 10000)
>  cm <- unique(model.matrix(~ soa, sp))
>  estimable(pr2, cm = cm, sim.lmer = TRUE, n.sim = 10000, conf.int = 0.95)
>  estimable(pr3, cm = cm, sim.lmer = TRUE, n.sim = 10000, conf.int = 0.95)
>
>  _____________________________
>  Professor Michael Kubovy
>  University of Virginia
>  Department of Psychology
>  USPS:     P.O.Box 400400    Charlottesville, VA 22904-4400
>  Parcels:    Room 102        Gilmer Hall
>          McCormick Road    Charlottesville, VA 22903
>  Office:    B011    +1-434-982-4729
>  Lab:        B019    +1-434-982-4751
>  Fax:        +1-434-982-4766
>  WWW:    http://www.people.virginia.edu/~mk9y/
>



