[R] Fitting mixed-effects models with lme with fixed error term variances
bates at stat.wisc.edu
Sat Nov 25 22:23:47 CET 2006
On 11/22/06, Gregor Gorjanc <gregor.gorjanc at bfro.uni-lj.si> wrote:
> Douglas Bates wrote:
> > On 11/22/06, Gregor Gorjanc <gregor.gorjanc at bfro.uni-lj.si> wrote:
> >> Douglas Bates wrote:
> >> > On 11/21/06, Gregor Gorjanc <gregor.gorjanc at bfro.uni-lj.si> wrote:
> >> >> Douglas Bates <bates <at> stat.wisc.edu> writes:
> >> >> ...>
> >> >> > Can you be more specific about which parameters you want to fix and
> >> >> > which you want to vary in the optimization?
> >> >>
> >> It would be nice to have the ability to fix all variances, i.e. the
> >> variances of the random effects.
> >> >
> >> > [...] effects but allow the variance of a slope for the same grouping factor
> >> > to be estimated. Well, you could but only in the fortune("Yoda")
> >> > sense.
> >> >
> >> Yes, I agree here. Thank you for the detailed answer.
> >> > By the way, if you fix all the variances then what are you optimizing
> >> > over? The fixed effects? In that case the solution can be calculated
> >> > explicitly for a linear mixed model. The conditional estimates of the
> >> > fixed effects given the variance components are the solution to a
> >> > penalized linear least squares problem. (Yes, the solution can also
> >> > be expressed as a generalized linear least squares problem but there
> >> > are advantages to using the penalized least squares representation.)
> >> Yup. It would really be great to be able to do that nicely in R: say,
> >> use lmer() once and, since this might take some time, use the
> >> estimates of the variance components next time to get the fixed and
> >> random effects. Is this possible with lmer or any related function -
> >> not in the fortune("Yoda") sense ;)
> > Not quite. There is now a capability in lmer to specify starting
> > estimates for the relative precision matrices which means that you can
> > use the estimates from one fit as starting estimates for another fit.
> > It looks like
> > fm1 <- lmer(...)
> > fm2 <- lmer(y ~ x + (...), start = fm1@Omega)
> > I should say that in my experience this has not been as effective as I
> > had hoped it would be. What I see in the optimization is that the
> > first few iterations reduce the deviance quite quickly but the
> > majority of the time is spent refining the estimates near the optimum.
> > So, for example, if it took 40 iterations to converge from the rather
> > crude starting estimates calculated within lmer it might instead take
> > 35 iterations if you give it good starting estimates.
> Yes, I also have the same experience with other programs, say VCE.
> What I was hoping for was just the solutions from the linear mixed
> model, i.e. either via the GLS or PLS representation, with no
> optimization if values for the variance components are passed as
> input - there should be a way to distinguish that from just passing
> starting values.
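As an aside, the penalized least squares calculation discussed above can
be sketched in a few lines of plain R. This is not lmer's actual code,
just a toy random-intercept model with a made-up relative precision
matrix Omega held fixed; given Omega, the conditional fixed effects come
from solving the augmented (penalized) normal equations, and they agree
with the GLS representation:

```r
## Sketch only: conditional fixed-effect estimates given a *fixed*
## relative precision Omega, via the PLS system
##   [X'X  X'Z; Z'X  Z'Z + Omega] [b; u] = [X'y; Z'y]
set.seed(1)
g <- rep(1:5, each = 4)             # grouping factor, 5 groups of 4
X <- cbind(1, rnorm(20))            # fixed-effects design (intercept, slope)
Z <- model.matrix(~ 0 + factor(g))  # random-intercept design
y <- X %*% c(2, 1) + rnorm(5)[g] + rnorm(20)
Omega <- diag(5) * 0.8              # assumed-known relative precision

A   <- rbind(cbind(crossprod(X),    crossprod(X, Z)),
             cbind(crossprod(Z, X), crossprod(Z) + Omega))
rhs <- c(crossprod(X, y), crossprod(Z, y))
sol <- solve(A, rhs)
b_pls <- sol[1:2]                   # conditional fixed effects
u_pls <- sol[-(1:2)]                # conditional modes of random effects

## Same fixed effects from the GLS representation, V = I + Z Omega^{-1} Z'
V <- diag(20) + Z %*% solve(Omega) %*% t(Z)
b_gls <- solve(t(X) %*% solve(V, X), t(X) %*% solve(V, y))
stopifnot(all.equal(b_pls, drop(b_gls), check.attributes = FALSE))
```

No iteration is involved: with the variance components fixed, the whole
fit is one linear solve.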
The PLS representation in lmer is in terms of the relative
variance-covariance matrices of the random effects where "relative"
means relative to s^2. (In fact, the Omega slot contains the relative
precision matrices (inverses of the relative variance matrices) but
it's the same idea. All variance components are expressed relative to
s^2.) If it is sufficient to fix these then you can easily do that.
Just follow the code in lmer until it gets to
.Call(mer_ECMEsteps, mer, cv$niterEM, cv$EMverbose)
LMEoptimize(mer) <- cv
and replace the first two lines by
.Call("mer_factor", mer, PACKAGE = "lme4")
A side-effect of performing the factorization is to evaluate the ML
and REML deviances and store the result in the deviance slot.
The "follow the code in lmer" part will get a bit tricky because of
the number of functions that are hidden in the lme4 namespace but I'm
sure you know how to get around that.
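For the record, the standard tricks for reaching hidden functions are
getFromNamespace() and the ':::' operator. A quick sketch, shown here
with an exported function from stats since lme4's internals vary by
version; the same calls work for unexported objects:

```r
## Two equivalent ways to reach objects inside a package namespace,
## whether or not they are exported.
f1 <- getFromNamespace("median", "stats")
f2 <- stats:::median   # ':::' reaches into the namespace directly
stopifnot(identical(f1, f2), f1(c(1, 3, 5)) == 3)
```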