[R-sig-ME] lme() vs aov()

Peter Dalgaard P.Dalgaard at biostat.ku.dk
Mon May 26 15:20:32 CEST 2008


Federico Calboli wrote:
> Peter Dalgaard wrote:
>> The gut reaction is that you shouldn't trust lme() in low-df cases,
>> but in this particular case the issue is different:
>>
>>  > summary(mod.aov)
>>
>> Error: source
>>           Df Sum Sq Mean Sq F value   Pr(>F)
>> drug       2 61.167  30.583  61.167 0.003703 **
>> Residuals  3  1.500   0.500
>> ---
>> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>
>> Error: Within
>>          Df Sum Sq Mean Sq F value Pr(>F)
>> Residuals  6    9.0     1.5
>>
>> Notice that the Residuals Mean Sq is larger in the Within stratum
>> than in the source stratum.
>> In terms of a mixed-effects model, this implies a negative estimate
>> for the variance of the source effect. lme() will have none of that
>> and sets it to zero instead. If you drop the Error(source) term you
>> get the same F as in lme() although the df differ.
>>
>> (The  "negative variance" can be interpreted as negative
>> within-source correlation, but that only works properly for balanced
>> designs. Long story...)
>
> Thank you Peter for the explanation. I'm perfectly happy about this
> particular model, but I'd like to ask you (and everyone else who'd
> like to chime in), what do you mean by "you shouldn't trust lme() in
> low-df cases"? Why?
>
> (I ask because I often have low-df analyses to do).
>
Cynics will say that you shouldn't trust low-df analysis, period!
Effectively, the F tests are corrected chisq/df statistics, but the
correction depends crucially on the normal distribution, notably its 3rd
and 4th moments. If the correction is small, you can probably assume that
it will be small in non-normal cases too, apply the correction that is
right for normally distributed data and hope for the best, but if it is
large, then all bets are off.

For instance, looking at tests with 3 numerator df and 5 or 20 denominator df:

> qchisq(.95,3)/3
[1] 2.604909
> qf(.95,3,5)
[1] 5.409451
> qf(.95,3,20)
[1] 3.098391

> 1-pf(qchisq(.95,3)/3,3,5)
[1] 0.1642641
> 1-pf(qchisq(.95,3)/3,3,20)
[1] 0.08017612
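
To get a feel for how fast that discrepancy dies off, the same quantity
can be tabulated over a range of denominator df (a small sketch along
the lines of the calculations above; numerator df fixed at 3, nominal
level 5%):

## actual size of a nominal 5% test that uses the asymptotic chisq
## criterion qchisq(.95, 3)/3 instead of the proper F criterion
dendf <- c(3, 5, 10, 20, 50, 100)
round(cbind(dendf, actual.size = 1 - pf(qchisq(.95, 3)/3, 3, dendf)), 3)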

For unbalanced data, and even sometimes in the balanced case (when
effects "cross" error strata), the "F" statistic is not F distributed,
but there are some approximation methods involving "approximate
denominator df". SAS has three of these (containment, Satterthwaite,
Kenward-Rogers), lme() has only the
containment method, and lmer() has none (it has the asymptotic chisq,
plus the mcsamp() approach, which is still "in the works").
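
In code terms, and assuming a hypothetical data frame dat with a
response y, a fixed factor drug and a grouping factor source (stand-in
names, not real data), the difference in df reporting looks roughly
like this:

library(nlme)
fit.lme <- lme(y ~ drug, random = ~ 1 | source, data = dat)
anova(fit.lme)    # numDF, denDF, F, p; denDF from the containment rule

library(lme4)
fit.lmer <- lmer(y ~ drug + (1 | source), data = dat)
anova(fit.lmer)   # F statistics only: no denominator df, no p values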

Of the three df correction methods, "containment" is quite easily fooled
into giving unreasonably large df even in balanced cases, SAS's
"Satterthwaite" is based on some rather dubious hacks, and only
Kenward-Roger appears to be on a reasonably sound theoretical footing.
All in my humble opinion, that is.

It would be nice to have Kenward-Roger implemented for lme() and
lmer(), but it does not mesh well with the computational technology used
to fit the models, so it is not as easy to do as it may sound. In my
opinion this is a problem, even in light of the limited usefulness of
the improved approximation: the object is often not to get the p value
right to some number of significant digits, but rather to have something
that flags tests as unreliable.
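
PS: To make the quoted point about dropping Error(source) concrete,
here is a sketch with the same hypothetical names (dat, y, drug,
source):

mod.aov  <- aov(y ~ drug + Error(source), data = dat)  # multistratum ANOVA
mod.aov2 <- aov(y ~ drug, data = dat)                  # Error(source) dropped
mod.lme  <- lme(y ~ drug, random = ~ 1 | source, data = dat)
summary(mod.aov)   # drug tested in the 'source' stratum
summary(mod.aov2)  # when the source variance is set to zero, this F ...
anova(mod.lme)     # ... is the one lme() reports, though the df differ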

 

-- 
   O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk)              FAX: (+45) 35327907



