[R-sig-ME] degrees of freedom in mixed model
Muldoon, Ariel
Ariel.Muldoon at oregonstate.edu
Fri Jan 24 23:44:20 CET 2014
A small addition to the discussion: I was recently reading Stroup's "Generalized Linear Mixed Models", which discusses this degrees-of-freedom issue for LMMs a bit. For models outside the more "classical" paradigm, it seems that the Kenward-Roger correction controls the Type I error rate but the Satterthwaite correction does not (although I did not go on to read the original papers on the subject).
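A minimal sketch of what that comparison looks like in practice, reusing the model from the original post (this assumes the poster's 'bip' data set; lmerTest's anova method takes a ddf argument that switches between the two corrections):

library(lmerTest)                     # loads a version of lmer() that adds df and p-values
model1 <- lmer(value ~ group + (1 | animal), data = bip)
anova(model1, ddf = "Satterthwaite")  # F test with Satterthwaite denominator df
anova(model1, ddf = "Kenward-Roger")  # same test with the Kenward-Roger correction (uses pbkrtest)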
Ariel
-----Original Message-----
From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Ben Bolker
Sent: Friday, January 24, 2014 10:20 AM
To: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] degrees of freedom in mixed model
On 14-01-24 01:07 PM, Jake Westfall wrote:
> Meh... My feeling is that the amount of controversy on this point is
> rather more limited than S Ellison lets on. Of course Bates is
> (famously, at this point) deeply skeptical about the approximate
> degrees of freedom approaches, but I get the impression that few of
> the rest of us who have spent some time thinking about the matter have
> any sort of strong feelings about it. The Satterthwaite method
> (implemented in lmerTest) is widely used, well understood, and
> basically seems to work quite well for controlling error rates in most
> cases, based on simulations. I think if you are wanting to scrutinize
> your model and the tests of the coefficients therein, there are far
> bigger fish to fry than worrying about the issue of approximate DFs
> vs. bootstrapping vs. MCMC vs. ...
>
> Jake
I more or less agree. The issue with F distributions, degrees of freedom, etc., is mostly a problem with complex designs that don't fit into the classical method-of-moments/ANOVA paradigm (R-side effects [which lme4 doesn't do yet], crossed and partially crossed random effects, etc.). In simple cases (as in the example here), the results of (restricted) ML analyses should more or less line up with the classical results. In addition to lmerTest, as pointed out by Søren Højsgaard in the original thread on r-help, the Kenward-Roger approximation is available in the pbkrtest package ...
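For completeness, a minimal sketch of the pbkrtest route, again assuming the 'bip' data and model from the original post; KRmodcomp() compares two nested fits, so the group effect is tested against an intercept-only model:

library(lme4)
library(pbkrtest)
full <- lmer(value ~ group + (1 | animal), data = bip, REML = TRUE)
null <- lmer(value ~ 1 + (1 | animal), data = bip, REML = TRUE)
KRmodcomp(full, null)  # F test for 'group' with Kenward-Roger-adjusted denominator df
PBmodcomp(full, null)  # parametric bootstrap comparison, which avoids the df question entirely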
If you asked me about 'denominator df' calculations for GLMMs I would be considerably more pessimistic ...
Schaalje, G., J. McBride, and G. Fellingham. 2002. "Adequacy of Approximations to Distributions of Test Statistics in Complex Mixed Linear Models." Journal of Agricultural, Biological, and Environmental Statistics 7 (4): 512-524.
http://www.ingentaconnect.com/content/asa/jabes/2002/00000007/00000004/art00004.
>> From: jbaldwin at fs.fed.us
>> To: S.Ellison at LGCGroup.com; iaingallagher at btopenworld.com; r-sig-mixed-models at r-project.org
>> Date: Fri, 24 Jan 2014 15:40:32 +0000
>> Subject: Re: [R-sig-ME] degrees of freedom in mixed model
>>
>> S Ellison: Despite you being a chemist, I think you're at least
>> mostly correct. But from the construction of my statement, it's
>> obvious that I am a statistician and I'm allowed, by law, to be wrong
>> 5% of the time. And if I claim to be a Frequentist, I don't even
>> have to identify which of my particular statements are incorrect.
>>
>> Jim
>>
>> Jim Baldwin
>> Station Statistician
>> Pacific Southwest Research Station
>> USDA Forest Service
>>
>> -----Original Message-----
>> From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of S Ellison
>> Sent: Friday, January 24, 2014 5:40 AM
>> To: Iain Gallagher; r-sig-mixed-models at r-project.org
>> Subject: Re: [R-sig-ME] degrees of freedom in mixed model
>>
>>
>>> library(lme4)
>>> model1 <- lmer(value~group + (1|animal), data=bip)
>>> summary(model1)
>>>
>> .......
>>> so I'd then have:
>>>
>>> qf(0.95,3,5) or qf(0.95,3,4)
>>>
>>> for my critical F value?
>>>
>>> Any advice (including whether the approach is right) would be useful.
>>
>> It's the wrong approach.
>>
>> You are using lmer, which uses maximum likelihood estimation, not
>> classical sums of squares. The degrees of freedom don't mean the same
>> thing, and the distribution of REML estimates of variance isn't
>> necessarily chi-squared. So F is not interpretable in the same way as it
>> would be in classical ANOVA.
>>
>> If you want p-values from an lmer model, you could get hold of the
>> lmerTest package. Other recommended approaches include variants on
>> MCMC. There is a great deal of controversy on this point, though; try
>> Googling "p-values from lmer" with particular attention to anything
>> by Douglas Bates (the package author). You _should_ find enough to
>> make you worry that the method used by lmerTest (which as I
>> understand it implements a method used by SAS) comes with quite
>> strong theoretical objections. I am quite sure the lmerTest authors
>> know that perfectly well and offer lmerTest as a package for those
>> who want to find out or for those whose management insist on a
>> SAS-compatible answer. But if I read correctly, that doesn't make it
>> the right thing to do.
>>
>> [Caveat - I'm a chemist. I could be wrong about this]
>>
>>>
>>> Best
>>>
>>> iain
>>>