[R-sig-ME] Fwd: same old question - lme4 and p-values

Andrew Robinson A.Robinson at ms.unimelb.edu.au
Sat Apr 12 23:18:59 CEST 2008


On Sat, Apr 12, 2008 at 02:02:09PM +0200, Reinhold Kliegl wrote:
> On Fri, Apr 11, 2008 at 3:10 PM, Kevin E. Thorpe
> <kevin.thorpe at utoronto.ca> wrote:
> > This has been a very interesting thread.  However, I'm still
> >  wrestling with what to do for a fixed effect that has more than
> >  one degree of freedom.
> >
> >  In the data I'm analyzing, I have three groups to compare.
> >
> >  So, I can get CIs for the two parameters, but that is a bit
> >  problematic for assessing an overall difference.
> >
> >  Is it valid to do the following?  Estimate the parameters using both
> >  ML and REML.  If the estimates show good agreement, is that sufficient
> >  evidence to conclude the ML procedure is converging and that I can
> >  use a likelihood ratio test for the fixed effect?
> >
> I assume you refer to using anova(fm1, fm2) with fm1 fitting the model
> without the fixed effect. This is a comparison of nested models, so a
> likelihood ratio test can be defined, but only for ML fits (REML
> log-likelihoods of models with different fixed effects are not
> comparable). Note, however, that Pinheiro & Bates (2000, p. 87,
> section 2.4.2) "do not recommend using
> such tests"; "not" is set in bold face. They show that such tests tend
> to be anti-conservative, especially if the number of parameters
> removed is large relative to the number of observations. Assuming you
> have a decent number of total observations, you may be fine.
> Alternatively, you may want to run a simulation for your situation;
> you will also find R code examples in that section of P&B.

I agree with Reinhold's position here.  I also note in passing that
Doug uses this strategy to test the fixed effects in the cake data
(see ?cake).  Doug, does the cake data analysis represent a softening
of your position, or a placeholder?
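
For concreteness, that strategy with the cake data would look roughly
like this (an untested sketch; the models in ?cake differ a bit, and
REML = FALSE assumes a reasonably recent version of lme4):

library(lme4)

## ML fits of nested models that differ only in the three-level fixed
## factor recipe, so the likelihood ratio test for recipe is on 2 df
fm2 <- lmer(angle ~ recipe + temperature + (1 | recipe:replicate),
            data = cake, REML = FALSE)
fm1 <- lmer(angle ~ temperature + (1 | recipe:replicate),
            data = cake, REML = FALSE)
anova(fm1, fm2)   # chi-square LRT for the overall recipe effect

With only two parameters dropped and a few hundred observations, the
anti-conservativeness that P&B describe should be fairly mild here.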
 
> My first reaction to your email was: Why is he only interested in the
> overall effect of a fixed factor and not in specific comparisons
> between its levels? After Andrew's comment on an earlier post, I
> understand that there are situations where you just want to control
> for an aspect of the design that is not the focus of your
> theoretical concerns (e.g., in ecology you may have three sites that
> could be characterized as levels of a fixed factor or as a sample from
> a random factor). Perhaps your fixed factor would also be better
> conceptualized as a random factor. In a way, you just want to control
> for the variance contributed by this factor. If this applies to your
> data, then you may be better off specifying your fixed factor as a
> random factor. Then, your anova(fm1, fm2) compares nested models that
> differ only in the random-effects part. In this case the likelihood
> ratio test can be used with models fit by REML. These tests tend to be
> conservative (Pinheiro & Bates, 2000, section 2.4.1; following up on
> Stram & Lee, 1994). So if your ANOVA statistic is significant, you are
> on the safe side; if it is not, you do not know. Also keep in mind
> that random effects with few levels may cause problems for model
> convergence.
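
In lme4 terms I read that suggestion as roughly the following (again
an untested sketch with the cake data; recipe has only three levels,
so the convergence caveat just mentioned applies in full):

## REML fits (the default) differing only in the random-effects part:
## recipe treated as a random instead of a fixed factor
fm.with    <- lmer(angle ~ temperature + (1 | recipe) +
                     (1 | recipe:replicate), data = cake)
fm.without <- lmer(angle ~ temperature + (1 | recipe:replicate),
                   data = cake)
## LRT on the variance component; tends to be conservative.  Depending
## on your lme4 version, you may need to stop anova() from refitting
## the models with ML before comparing them.
anova(fm.without, fm.with)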

That's an interesting idea, even if the factor is intended to be
interpreted as fixed.  It might work to a certain order of
approximation, but I'm not clear on how the math would play out.  Some
simulations might provide a measure of comfort in individual
situations.
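
For individual cases, one rough way to get that comfort is a small
parametric simulation under the null.  An untested sketch, reusing the
ML fits fm1 and fm2 from the sketch further up (and assuming your lme4
has a simulate() method for fitted models):

set.seed(1)
nsim   <- 200                     # small, just to keep the example quick
pvals  <- numeric(nsim)
simdat <- cake
for (i in seq_len(nsim)) {
    simdat$angle <- simulate(fm1)[[1]]     # response generated under H0
    null.fit <- update(fm1, data = simdat)
    full.fit <- update(fm2, data = simdat)
    pvals[i] <- anova(null.fit, full.fit)[2, "Pr(>Chisq)"]
}
mean(pvals < 0.05)   # near 0.05: nominal LRT roughly calibrated;
                     # well above 0.05: anti-conservative, as P&B warn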

Best wishes,

Andrew

-- 
Andrew Robinson  
Department of Mathematics and Statistics            Tel: +61-3-8344-6410
University of Melbourne, VIC 3010 Australia         Fax: +61-3-8344-4599
http://www.ms.unimelb.edu.au/~andrewpr
http://blogs.mbs.edu/fishing-in-the-bay/



