[R-sig-ME] [R] lmer and method call
Douglas Bates
bates at stat.wisc.edu
Sat Dec 1 18:59:23 CET 2007
On Dec 1, 2007 10:08 AM, Dieter Menne <dieter.menne at menne-biomed.de> wrote:
> Douglas Bates <bates <at> stat.wisc.edu> writes:
>
> (lmer)
>
> > The default is PQL, to refine the
> > starting estimates, followed by optimization of the Laplace
> > approximation. In some cases it is an advantage to suppress the PQL
> > iterations which can be done with one of the settings for the control
> > argument.
>
> I had found out the hard way that it is often better to let PQL
> play the game rather loosely. Yet I never dared to tell anyone, for fear
> the approximation could end up in the wrong slot.
> Are there any rules (besides trying variants) for whether I can trust such a result?
I'm not sure I understand the sense of your first statement. Do you
mean that you have found that you should use PQL, or that you should
not use PQL?
I would advise using the Laplace approximation for the final
estimates. At one time I thought it would be much slower than the PQL
iterations, but it doesn't seem to be that bad.
I also thought that PQL would refine the starting estimates, in the
sense that it would take comparatively crude starting values and get
you much closer to the optimum before you switched to Laplace.
However, because PQL iterates on both the fixed effects and the random
effects with fixed weights, then updates the weights, then returns to
the fixed effects and random effects, and so on, there is a
possibility that the early weights force poor values of the fixed
effects from which later iterations do not recover.
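To make that alternation concrete, here is a rough sketch of a
PQL-style loop for a logistic random-intercept model. This is not the
lme4 code itself, only an illustration (with made-up names and
simulated data) of why weights computed from an early linear predictor
can steer the fixed effects:

    library(lme4)

    ## simulated logistic random-intercept data (names are made up)
    set.seed(1)
    grp  <- gl(20, 10)                    # 20 groups of 10 observations
    x    <- rnorm(200)
    eta0 <- -0.5 + x + rnorm(20)[grp]     # true linear predictor
    y    <- rbinom(200, 1, plogis(eta0))

    ## PQL-style alternation: fit a weighted LMM with the working
    ## weights held fixed, then recompute the weights from the new
    ## linear predictor and repeat.
    eta <- qlogis((y + 0.5)/2)            # crude starting values
    for (i in 1:20) {
        mu <- plogis(eta)
        w  <- mu * (1 - mu)               # working weights, fixed in the LMM step
        z  <- eta + (y - mu)/w            # working response
        fit <- lmer(z ~ x + (1 | grp), weights = w)
        eta.new <- fitted(fit)            # X beta + Z b from this fit
        if (max(abs(eta.new - eta)) < 1e-6) break
        eta <- eta.new
    }
    fixef(fit)

The point is that each LMM fit inside the loop is conditional on
weights computed from the previous linear predictor, so poor early
weights can pull the fixed effects somewhere that later updates may
not undo.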
I tend to prefer going directly to the Laplace approximation, without
any PQL iterations; that is,

    method = "Laplace", control = list(usePQL = FALSE)
I would be interested in learning what experiences you or others have
had with the different approaches.
I am cc:ing this to the R-SIG-mixed-models list and suggest that
further discussion take place on that list only.