Fwd: [R] Enduring LME confusion… or Psychologists and Mixed-Effects

John Maindonald john.maindonald at anu.edu.au
Thu Aug 12 00:28:07 CEST 2004


In my understanding of the problem, the model
   lme1 <- lme(resp~fact1*fact2, random=~1|subj)
should be ok, provided that variances are homogeneous both between & 
within subjects.  The function will sort out which factors & 
interactions are to be compared within subjects, & which between 
subjects.  The problem with df's arises (for lme() in nlme, but not in 
lme4) when random effects are crossed, I believe.
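
As a rough sketch of the crossed case, using the lmer() syntax of later 
versions of lme4 (the data frame 'dat' and the second grouping factor 
'item' here are purely illustrative, not part of the original example):

   library(lme4)
   ## random intercept for subjects only, the analogue of lme1 above
   lmer1 <- lmer(resp ~ fact1*fact2 + (1 | subj), data = dat)
   ## crossed random effects for subjects and items, which lmer()
   ## handles directly but lme() in nlme does not accommodate easily
   lmer2 <- lmer(resp ~ fact1*fact2 + (1 | subj) + (1 | item), data = dat)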

It is difficult to give a general rule on the effect of imbalance; it 
depends on the relative contributions of the two variances and the 
nature of the imbalance.  There should be a rule that people who ask 
these sorts of questions are required to make their data available 
either (if the data set is small) as part of their message or (if data 
are extensive) on a web site.  Once the results of the analysis are on 
display, it is often possible to make an informed guess at the likely 
impact.  Use of simulate.lme() seems like a good idea.
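
A rough sketch of how simulate.lme() might be used here, assuming the 
fitted models lme1 and lme2 from the message below (which differ only 
in their random-effects structure, lme1 being the simpler of the two):

   library(nlme)
   ## simulate data under the simpler model (lme1), refit both models to
   ## each simulated data set, and collect the likelihood-ratio statistics
   lmeSim <- simulate.lme(lme1, m2 = lme2, nsim = 1000)
   ## compare nominal and empirical p-values for the test of the
   ## additional random effects
   plot(lmeSim)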
John Maindonald.

On 11 Aug 2004, at 8:05 PM, r-help-request at stat.math.ethz.ch wrote:

> From: Spencer Graves <spencer.graves at pdf.com>
> Date: 10 August 2004 8:44:20 PM
> To: Gijs Plomp <gplomp at brain.riken.jp>
> Cc: r-help at stat.math.ethz.ch
> Subject: Re: [R] Enduring LME confusion.... or Psychologists and 
> Mixed-Effects
>
>
>      Have you considered trying a Monte Carlo?  The significance 
> probabilities for unbalanced anovas use approximations.  Package nlme 
> provides "simulate.lme" to facilitate this.  I believe this function 
> is also mentioned in Pinheiro and Bates (2000).
>      hope this helps.  spencer graves
> p.s.  You could try the same thing in both library(nlme) and 
> library(lme4).  Package lme4 is newer and, at least for most cases, 
> better.
> Gijs Plomp wrote:
>
>> Dear ExpeRts,
>>
>> Suppose I have a typical psychological experiment that is a 
>> within-subjects design with multiple crossed variables and a 
>> continuous response variable. Subjects are considered a random 
>> effect. So I could model
>> > aov1 <- aov(resp~fact1*fact2+Error(subj/(fact1*fact2)))
>>
>> However, this only holds for orthogonal designs with equal numbers of 
>> observations and no missing values. These assumptions are easily 
>> violated, so I seek refuge in fitting a mixed-effects model with the 
>> nlme library.
>> > lme1 <- lme(resp~fact1*fact2, random=~1|subj)
>>
>> When testing the 'significance' of the effects of my factors with 
>> anova(lme1), the degrees of freedom that lme uses in the denominator 
>> span all observations and are identical for all factors and their 
>> interaction. I read in a previous post on the list ("[R] Help with 
>> lme basics") that this is inherent to lme. I studied the instructive 
>> book by Pinheiro & Bates and I understand why the degrees of freedom 
>> are assigned as they are, but think they may not be appropriate in 
>> this case. Used in this way, it seems that lme is more prone to Type I 
>> errors than aov.
>>
>> To get more conservative degrees of freedom one could model
>> > lme2 <- lme(resp~fact1*fact2, random=~1|subj/fact1/fact2)
>>
>> But this is not a correct model because it assumes the factors to be 
>> hierarchically ordered, which they are not.
>>
>> Another alternative is to model the random effect using a matrix, as 
>> seen in "[R] lme and mixed effects" on this list.
>> > lme3 <- lme(resp~fact1*fact2, random=list(subj=pdIdent(form=~fact1-1), 
>> +   subj=~1, fact2=~1))
>>
>> This provides 'correct' degrees of freedom for fact1, but not for the 
>> other effects, and I must confess that I don't understand this use of 
>> matrices; I'm not a statistician.
>>
>> My questions thus come down to this:
>>
>> 1. When aov's assumptions are violated, can lme provide the right 
>> model for within-subjects designs where multiple fixed effects are 
>> NOT hierarchically ordered?
>>
>> 2. Are the degrees of freedom in anova(lme1) the right ones to 
>> report? If so, how do I convince a reviewer that, despite the large 
>> number of degrees of freedom, lme does provide a conservative 
>> evaluation of the effects? If not, how does one get the right denDf 
>> in a way that can be easily understood?
>>
>> I hope that my confusion is all due to an ignorance of statistics and 
>> that someone on this list will kindly point that out to me. I do 
>> realize that this type of question has been asked before, but think 
>> that an illuminating answer can help R spread into the psychological 
>> community.
>>
John Maindonald             email: john.maindonald at anu.edu.au
phone : +61 2 (6125)3473    fax  : +61 2(6125)5549
Centre for Bioinformation Science, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.



