[R-sig-ME] Questions on migrating from nlme to lme4
John Maindonald
John.Maindonald at anu.edu.au
Wed Jun 13 20:07:37 CEST 2007
Bill, it is a little over 24 hours since Peter Dalgaard
and I were locked, though briefly, in conversation
about degrees of freedom in lme4.
More seriously, I am impressed with the strides that
you have been making with lme4.
John Maindonald.
> Thanks for your messages, Robert. You mentioned posting to the
> R-SIG-Mixed-Models list so I am taking the liberty of cc:ing the list
> on this reply.
>
> It is best to think of lme4 as a reimplementation of mixed models from
> the ground up. I have been seeking a way of generalizing the theory
> and computational methods for mixed models in a flexible, efficient
> and extensible way. At the risk of being immodest I will say that I
> think I have done so.
>
> For linear mixed models the big difference between lme and lmer is
> that lmer can handle models with crossed or partially crossed random
> effects efficiently. In areas like the analysis of scores on
> state-wide tests mandated by the No Child Left Behind Act or in the
> analysis of "breeding values" by animal scientists, large data sets
> with a complex structure are the norm. The state-of-the-art methods
> for sparse matrices that are used in lmer make fitting mixed models to
> such data feasible.
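>
> For illustration, here is the kind of fully crossed specification that
> lmer accepts directly (a minimal sketch using the Penicillin data
> shipped with lme4, in which every sample is assayed on every plate):
>
> library(lme4)
> data(Penicillin)
> ## plate and sample are fully crossed grouping factors, a structure
> ## that lme's nested formulation cannot express efficiently
> fm <- lmer(diameter ~ 1 + (1 | plate) + (1 | sample), Penicillin)
> summary(fm)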
>
> In the process of examining linear mixed models in detail I eventually
> arrived at a formulation of the linear mixed model that generalizes to
> generalized linear mixed models and nonlinear mixed models in a way
> that, to me at least, makes sense. The basic computational steps and
> the representation of the models are sufficiently similar to share a
> lot of the code.
>
> Thus the lme4 package is not an upgrade of the nlme package. It
> shares some characteristics with nlme but it is best to think of it as
> a reimplementation, not an extension.
>
> By the way, the theoretical development that I mentioned will be
> described in the vignette "Theory and Computational Methods for Mixed
> Models". The first few sections of that vignette are included in the
> lme4_0.99875-1 package and there is more to come. Sorry for releasing
> snapshots of a paper (or a book chapter, in this case). Because the
> document changes between releases of the package there is not a stable
> reference to cite. Once I have covered all the ground I want to cover
> I will release a stable version as a technical report.
>
> On 6/13/07, Robert Kushler <kushler at oakland.edu> wrote:
>
>> Follow-up to previous message (included below):
>
>> I discovered that "confint" can be used in place of "intervals", so
>> item 2 is taken care of.
>
> There is a "confint" method for "lmList" objects (also discussed
> below). I'm not sure I will produce one for "lmer" objects if the
> degrees of freedom controversy is not resolved. Also I don't think
> that symmetric intervals on variance components make sense and would
> prefer not to produce them.
>
> Generally I recommend using mcmcsamp to produce "confidence intervals"
> on the variance components. Approximate (and symmetric) confidence
> intervals on the fixed effects are reasonably accurate but such
> intervals for the random effects can be poor approximations.
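>
> For example (a minimal sketch; the exact class of the object that
> mcmcsamp returns has varied between lme4 versions, so treat the
> post-processing step as approximate):
>
> library(lme4)
> library(coda)  # for HPDinterval
> fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
> samp <- mcmcsamp(fm, n = 1000)  # MCMC sample from the posterior
> ## highest posterior density intervals for all parameters, including
> ## the (non-symmetric) intervals on the variance components
> HPDinterval(samp)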
>
>> I also gather that you allow (expect?) users to use the two libraries
>> simultaneously. I assume you recommend loading nlme first, so the lmer
>> version of common functions will be used.
>
> No, I don't think it is a good idea to use nlme and lme4
> simultaneously. At one point I had code in lme4 to give a warning if
> you tried to load the lme4 package when the nlme package was already
> loaded.
>
>> The windows version of 0.99875-1 appeared today.
>
> Good.
>
>> I have a small bit of user feedback: lmList requires the "data" argument
>> even when all of the needed variables exist in the global environment.
>> I know it's good practice to construct a data frame (and advantageous to
>> exploit the groupedData structure), but since lmer lets the user be lazy,
>> why doesn't lmList do the same?
>
> I haven't looked at that version of lmList for a long time. I suppose
> that I am not generating a call to 'model.frame' and I should. I can
> put that on the "To Do" list but it will be a long way down the list.
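>
> (The mechanism at issue: model.frame() looks variables up in the
> environment of the formula when no 'data' argument is supplied, which
> is what lets lmer be "lazy". A minimal illustration:
>
> x <- runif(10)
> y <- 2 * x + rnorm(10)
> mf <- model.frame(y ~ x)  # finds x and y without a 'data' argument
> str(mf)
>
> Generating such a call inside lmList would give it the same behaviour.)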
>
> The answer to another of your questions about using groupedData
> objects is that I do not encourage the use of groupedData objects with
> lme4. They were an attempt to incorporate information on the grouping
> structure with the data but they got too complicated. I'm in a "keep
> it simple" mode in the development of the lme4 package. Also, the
> groupedData objects use S3 classes and one of the purposes of the lme4
> package is to explore the use of S4 classes and methods.
>
> Another aspect of the groupedData objects is that the formulation,
> like the formulation of lme itself, makes sense for nested groupings
> but not for crossed or partially crossed groupings.
>
> To some extent the purpose of defining the groupedData classes was to
> make it easier to generate meaningful and informative graphics with
> the trellis functions in S-PLUS. Deepayan Sarkar's development of the
> lattice package for R has incorporated so many new features that many
> of the operations that required custom code in trellis can be done
> transparently in lattice.
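>
> For instance, a display that once needed custom trellis panel code (a
> minimal sketch using the sleepstudy data from lme4):
>
> library(lattice)
> library(lme4)  # for the sleepstudy data
> ## one panel per subject: raw points plus a per-subject least squares
> ## line, requested entirely through the 'type' argument
> xyplot(Reaction ~ Days | Subject, sleepstudy, type = c("p", "r"))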
>
>> Thanks again for whatever time you can spare, Rob Kushler
>
>> Text of previous message:
>
>> I am belatedly transitioning to lme4 and have some questions. Some of
>> the following are things I used to show my students with the nlme
>> library but have not yet figured out how to do with lme4. If some of
>> this is on your to-do list, perhaps you could post a message to
>> [R-sig-ME] about your plans. (I am eagerly awaiting the appearance of
>> the 0.99875-1 zip file.)
>
> Ahem, I believe you mean the nlme *package*. (Bear in mind that
> Martin Maechler reads this list. :-)
>
>> My general question is whether any of the following can be done in the
>> current version of lme4 (and if so how), or if these features might be
>> added ("soon", not so soon, or never). I know that I can show my
>> students a "mixed" (sic) strategy using both the old and new libraries,
>> but that's a bit awkward, and I'd like to convince them (and myself)
>> that we can someday leave nlme behind with no (or few) regrets.
>
>> If you have addressed these issues elsewhere, forgive me for not
>> locating them.
>
>> 1) The "correlation" and "weights" arguments of gls and lme were very
>>    useful. In particular, I'd like to fit the AR1 and "unstructured"
>>    models, and be able to "model heteroscedasticity".
>
> The correlation and weights arguments haven't made it past the "keep
> it simple" filter yet.
>
>> 2) I understand your position on Wald inference for variance
>>    components, but the "intervals" function had other uses. In
>>    particular, I like to show the students
>>    "plot(intervals(lmListobject))" as a diagnostic tool. Do you no
>>    longer recommend this?
>
> I do think that a plot of the confidence intervals from an lmList
> object is a good idea. (For one thing, those confidence intervals are
> well-defined.) I have code that looks like
>
> ###################################################
> ### chunk number 3: Sl1
> ###################################################
> print(plot(confint(lmList(Reaction ~ Days | Subject, sleepstudy),
>                    pooled = TRUE),
>            order = 1))
>
> and it appears to work in current versions of lme4.
>
>> 3) Do you plan to give the inference tools in "languageR" an official
>>    blessing by incorporating them into lme4?
>
> I certainly think they are a good idea and I worked with Harald Baayen
> a bit on the development of them last January when I visited with him.
> However, I still want to think through the inference issues before
> incorporating my final word about inference in the lme4 package. One
> of the annoying aspects others find in working with me is that I am
> very slow on design and theoretical development. I like to think that
> the good side of this is that I am thorough.
>
> The lme4 package and the theoretical work behind it have had a long
> gestation period - one that probably will extend for a long time to
> come. However, I like the results and I am comfortable with the idea
> that I have carefully considered many, many aspects of the model and
> computational methods before incorporating them.
>
>> 4) Your R-news article in May 2005 shows "anova" producing F's and
>>    p-values, but that no longer seems to be the case.
>
> Indeed. I realize that calculation of p-values is immensely important
> to some people but doing it properly for unbalanced designs is
> extremely complicated and, in the areas that I am most concerned with
> (large data sets with complex structure), it's a non-issue. When you
> have data on a half-million students in 2500 schools whatever
> calculation you use for the degrees of freedom will produce a number
> so large that the exact number is irrelevant.
>
> If someone wants to work out how to calculate an approximate
> denominator degrees of freedom, and they can get the "degrees of
> freedom police" (John Maindonald and Peter Dalgaard) to approve the
> result, I'll put the degrees of freedom back in. Until then, you just
> have to do what several people in the psycholinguistics area have
> done, which is to uncomment the lines for the denominator degrees of
> freedom upper bound in the code for the anova method.
>
>> In this connection, do you have plans to implement the "potentially
>> useful approximate D.F." mentioned in mcmcsamp.R?
>
> I'm not sure what code you are referring to there. Is it in the
> languageR package?
>
> I imagine the approximation in question is from the trace of the "hat"
> matrix. If you look at the Theory vignette you will see I view the
> basic calculation in mixed models as the solution of a penalized least
> squares problem. For an unpenalized least squares problem the
> residual degrees of freedom are n minus the trace of the hat matrix.
> Frequently in penalized least squares problems the same approximation
> is used.
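>
> To make that concrete, for an ordinary least squares fit (using R's
> built-in cars data):
>
> fm <- lm(dist ~ speed, data = cars)
> h <- hatvalues(fm)   # diagonal elements of the hat matrix
> sum(h)               # trace of the hat matrix = 2 coefficients
> nrow(cars) - sum(h)  # n - tr(H) = 48, matching df.residual(fm)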
>
> That quantity also appears in the definition of the Deviance
> Information Criterion (DIC) by Spiegelhalter et al. for this model.
> However, it does not reproduce the results from balanced experiments
> with small data sets and I am beginning to have some doubts about
> whether this calculation is the best way to go.
>
>> 5) Do you no longer recommend the use of groupedData objects, or do
>>    you plan to add them to lme4?
>
> See above.
>
>> I'm sure I'll think of additional questions, but that's (more than)
>> enough for now.
>
>> Thanks and regards, Rob Kushler (Oakland University)
>
> Thanks for your questions. I hope my answers were helpful.
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>