[R-sig-ME] lme vs. lmer
desja004 at umn.edu
Wed Sep 30 17:22:17 CEST 2009
Hi Raldo,
I think you have three options if you're looking for an empirically driven
way to simplify a model and remove fixed effects (there may, of course, be
others):
1) Likelihood ratio test using anova()
2) Model comparison with AIC
3) Model comparison with BIC
Pinheiro & Bates (2000) discuss their usage.
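For instance, a minimal sketch with glmer (the formula, family, and
variable names below are just placeholders for your own model):

  library(lme4)
  ## glmer always fits by (approximate) ML, so there is no REML argument
  g1 <- glmer(y ~ x1 + x2 + (1 | group), data = d, family = binomial)
  g0 <- glmer(y ~ x2 + (1 | group), data = d, family = binomial)
  anova(g0, g1)  # 1) likelihood ratio test
  AIC(g0, g1)    # 2) AIC comparison
  BIC(g0, g1)    # 3) BIC comparison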
I am not sure which one is preferred, but most people seem to be a little
leery of BIC and favor AIC over it. Personally, I would let theory dictate
which terms to remove, not p-values.
HTH,
Chris
On Sep 30 2009, Raldo Kruger wrote:
>Hi all, I've been following this thread since it's of interest to the
>analyses I'm currently doing.
>My question is, how does one do model simplification with lmer (or
>glmer in my case) if there are no p-values (since p-values are used to
>determine which terms to drop, and the drop1 function does not work
>for glmer)?
>
>And Douglas, could you provide a working example of getting the
>p-values as you described, preferably with glmer (glmer does not have
>the REML=FALSE option)? I understand the first part of fitting two
>models, one with and one without the term of interest... Does that
>mean one has to do this for every term in order to get all the
>p-values?
>
>Many thanks, Raldo
>
>On 9/29/09, Douglas Bates <bates at stat.wisc.edu> wrote:
>> On Tue, Sep 29, 2009 at 3:58 PM, Peter Dalgaard
>> <p.dalgaard at biostat.ku.dk> wrote:
>>> Ben Bolker wrote:
>>>>
>>>> Douglas Bates wrote:
>>>>>
>>>>> On Tue, Sep 29, 2009 at 1:02 PM, Ben Bolker <bolker at ufl.edu> wrote:
>>>>>
>>>>>> Christopher David Desjardins wrote:
>>>>>>>
>>>>>>> I've started working through Pinheiro & Bates (2000) and noticed
>>>>>>> the use of lme from the nlme package. I am curious whether lmer
>>>>>>> from lme4 has superseded lme or whether lme still holds its own.
>>>>>>> The reason I ask is that I have taken a few classes where we've
>>>>>>> used lmer exclusively and only read about lme today. If both
>>>>>>> functions are on equal footing, can the p-values from lme be
>>>>>>> trusted?
>>>>>>> Thanks!
>>>>>>> Chris
>>>>>>
>>>>>> You should read the extended discussion of p-values, degrees of
>>>>>> freedom, etc. that is on the R wiki (I think) and referenced from
>>>>>> the R FAQ. At least in my opinion, (n)lme is still fine (and indeed
>>>>>> necessary at this stage for fitting heteroscedastic and correlated
>>>>>> models). The df/p-value estimates, however, are "use at your own
>>>>>> risk" -- you'll have to read the literature and decide for yourself.
>>>>>> I still think there's room for someone to implement (at least)
>>>>>> Satterthwaite and (possibly) Kenward-Roger corrections, if only for
>>>>>> the sake of comparison, but I'm not volunteering.
>>>>>
>>>>> You may need to define them first. Many of the formulas in the
>>>>> mixed-models literature assume a hierarchical structure in the
>>>>> random effects - certainly we used such a formula for calculating
>>>>> the denominator degrees of freedom in the nlme package. But lme4
>>>>> allows for fully or partially crossed random effects, so you can't
>>>>> think in terms of "levels" of random effects.
>>>>>
>>>>> Referring to the "Satterthwaite and Kenward-Roger corrections" gives
>>>>> the impression that these are well-known formulas and that
>>>>> implementing them would be a simple matter of writing a few lines of
>>>>> code. I don't think it is. I would be very pleased to incorporate
>>>>> such code if it could be written but, as I said, I don't even know
>>>>> whether such things are defined in the general case, let alone easy
>>>>> to calculate.
>>>>>
>>>>> I am not trying to be argumentative (although of late I seem to have
>>>>> succeeded in being that). I'm just saying that I don't think this is
>>>>> trivial. (If I wanted to be argumentative, I would say that it is
>>>>> difficult and, for the most part, irrelevant. :-)
>>>>
>>>> Fair enough. Actually, I'm not sure I meant implementing them in
>>>> lmer -- even implementing them in nlme would be useful (and perhaps
>>>> more straightforward, if not trivial). I also wouldn't impose the
>>>> requirement that they be feasible for huge data sets -- I'm just
>>>> curious whether they can be implemented within lme in a relatively
>>>> straightforward/boneheaded way.
>>>> But again, this is very far down my to-do list (and at the edge of
>>>> my abilities) and completely off yours, so unless someone else
>>>> bites, it won't happen.
>>>
>>> I don't think (n)lme is all that easy either; you still need to sort
>>> out the connection between its multilevel formulation and the
>>> projection matrices in the K-R paper. In both nlme and lme4, an
>>> implementation is almost certainly possible, although probably
>>> complicated and perhaps at the expense of all computational
>>> efficiency.
>>>
>>> One main problem is that even when they can be calculated, the
>>> corrections rely on a normal distribution assumption which is more
>>> than likely wrong in practice. This isn't any different from ordinary
>>> t-tests: once you get into the single-digit df regime, you really
>>> don't know what you are doing, and if there are more than 30 df, the
>>> normal approximation works well enough without the correction.
>>>
>>> Accordingly, I tend to see low df more as a warning flag than as
>>> something that gives accurate p values, and I sometimes wonder
>>> whether there is a way to raise such a flag more expediently.
>>
>> I agree, wholeheartedly.
>>
>> My general advice to those who are required to produce a p-value for a
>> particular fixed-effects term in a mixed-effects model is to use a
>> likelihood ratio test: fit the model including that term using maximum
>> likelihood (i.e., REML = FALSE), fit it again without the term, and
>> compare the two fits using anova().
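>>
>> As a minimal sketch of that recipe (the formula, data frame, and
>> variable names below are only placeholders):
>>
>>   library(lme4)
>>   ## fit with and without the term of interest, both by ML
>>   full <- lmer(y ~ x1 + x2 + (1 | group), data = d, REML = FALSE)
>>   reduced <- lmer(y ~ x2 + (1 | group), data = d, REML = FALSE)
>>   anova(reduced, full)  # likelihood ratio test for the x1 term
>>
>> Repeating this fit-and-compare step for each term of interest gives
>> the corresponding p-values.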
>>
>> The likelihood ratio statistic is compared to a chi-squared
>> distribution to get a p-value, and this process is somewhat suspect
>> when the degrees of freedom are small. However, so many other things
>> could be going wrong when you are fitting complex models to few
>> observations that this may be the least of your worries.
>>
>> I appreciate that for Ben and others in fields like ecology the need
>> to incorporate many different possible terms in models for
>> comparatively small data sets may be inevitable. But it is also
>> inevitable that the precision of the information one can extract from
>> such small data sets is low. Reducing such an analysis to a set of
>> p-values for various terms and treating those p-values as if they were
>> precisely determined is an oversimplification, even when journal
>> editors insist on it.