[R] lme vs. lmer results

Dimitri Liakhovitski dimitri.liakhovitski at gmail.com
Tue Oct 26 23:45:40 CEST 2010


Thanks a lot, Douglas. It's very helpful.

A clarification question about specifying the model in lmer. You said
it should be:
mix.lmer <- lmer(DV ~ a + b + c + d + (e + f + g + h + ii | group), mydata)

I assume it was a typo and you meant that the last predictor in
brackets should be i (rather than ii), right?

Also you said: "I wouldn't recommend it though as this requires
estimating  21 variance and covariance parameters for the random
effects.  Almost certainly the estimated variance-covariance matrix
will end up being singular.  Unless you are careful you may not notice
this."
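
Just to check my own arithmetic on the 21: if the random effects are the
intercept plus the five slopes e, f, g, h, i, their variance-covariance
matrix is 6 x 6 and symmetric, so it has 6 variances plus 15 covariances,
i.e. 6 * 7 / 2 = 21 free parameters. In R:

# 6 variances plus choose(6, 2) = 15 covariances for a 6 x 6 covariance matrix
6 + choose(6, 2)   # 21

And I guess one way to notice a (near-)singular fit after the fact is to
look at VarCorr() - variances of (nearly) zero or correlations of +/-1
would be the warning signs:

VarCorr(mix.lmer)   # for whichever model actually converged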

Question: what would you recommend one do in the following situation?
I have quite a few predictors that are all fixed (and are usually
estimated using simple OLS regression). Now I have the same setup,
except that my observations come from different groupings (the same
number of observations per grouping, and the same set of predictors and
the same DV in all groupings). I thought I would use the factor "group"
to define the random-effects part of the model, because I didn't want
the predictor coefficients to vary wildly from group to group - I wanted
them to be "anchored" in the pooled model's coefficients. But if that is
not feasible mathematically, what should one do?
Maybe run a series of models like this:
mix.lmer.e <- lmer(DV ~ a + b + c + d + (e | group) + f + g + h + i, mydata)
mix.lmer.f <- lmer(DV ~ a + b + c + d + e + (f | group) + g + h + i, mydata)
mix.lmer.g <- lmer(DV ~ a + b + c + d + e + f + (g | group) + h + i, mydata)
etc.,
and look at the random effects for each predictor in question one at a
time? But would that be right?
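
Or maybe, instead, keep all the slopes random in a single model but drop
the covariances among them, using the 0 + notation you mentioned, so that
only 6 variances (and no covariances) have to be estimated instead of 21
parameters? Just a sketch of what I mean (the model name mix.lmer.diag is
only illustrative):

# Uncorrelated random slopes: one (Intercept) term plus one independent
# slope term per predictor, so 6 variances and no covariances are estimated.
# (Assumes library(lme4) is loaded and mydata contains all these columns.)
mix.lmer.diag <- lmer(DV ~ a + b + c + d + e + f + g + h + i +
                        (1 | group) + (0 + e | group) + (0 + f | group) +
                        (0 + g | group) + (0 + h | group) + (0 + i | group),
                      data = mydata)

Would that be a more defensible middle ground?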

My ultimate goal is to estimate the original regression model
(DV ~ a + b + c + d + e + f + g + h + i) accurately - but separately for
each of the groups...
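
And, in case it clarifies what I am after: once a model is fit, what I
really want are the per-group coefficient estimates, i.e. the pooled
fixed effects plus each group's predicted random effects. As far as I
understand, coef() returns exactly that - a sketch, using the
hypothetical mix.lmer.diag model from above:

# Per-group coefficients = fixed effects + predicted random effects
coef(mix.lmer.diag)$group
# The pooled fixed effects and the group-level deviations separately:
fixef(mix.lmer.diag)
ranef(mix.lmer.diag)$group

Is that a legitimate way to get the "anchored" group-level coefficients I
described?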

Thanks a lot for your advice!
Dimitri


On Tue, Oct 26, 2010 at 3:57 PM, Douglas Bates <bates at stat.wisc.edu> wrote:
> On Tue, Oct 26, 2010 at 12:27 PM, Dimitri Liakhovitski
> <dimitri.liakhovitski at gmail.com> wrote:
>> Hello,
>> and sorry for asking a question without the data - hope it can still
>> be answered:
>
>> I've run two things on the same data:
>
>> # Using lme:
>> mix.lme <- lme(DV ~ a + b + c + d + e + f + h + i,
>>                random = ~ e + f + h + i | group, data = mydata)
>
>> # Using lmer
>> mix.lmer <- lmer(DV ~ a + b + c + d + (1 | group) + (e | group) +
>>                  (f | group) + (h | group) + (i | group), data = mydata)
>
> Those models aren't the same and the model for lmer doesn't make
> sense.  You would need to write the random effects terms as
> (0+e|group), etc. because (e|group) is the same as (1 + e|group) so
> you are including (Intercept) random effects for group in each of
> those 5 terms.
>
> To generate the same model as you fit with lme you would use
>
> mix.lmer <- lmer(DV ~ a + b + c + d + (e + f + g + h + ii | group), mydata)
>
> I wouldn't recommend it though as this requires estimating  21
> variance and covariance parameters for the random effects.  Almost
> certainly the estimated variance-covariance matrix will end up being
> singular.  Unless you are careful you may not notice this.
>
>> lme provided an output (fixed effects and random effects coefficients).
>
> lme is not as picky about singularity of the variance-covariance
> matrix as lmer is.
>
>> lmer gave me an error: Error in mer_finalize(ans) : Downdated X'X is
>> not positive definite, 10.
>> I reran lmer, but specifying random effects for 2 fewer predictors.
>> This time it ran and provided output. (BTW, the random effects from
>> the lmer fit with 2 fewer predictors specified as random were very
>> close to the lme output.)
>
> Yes, lmer could converge in such a case but the parameter estimates
> are not meaningful because of the ambiguity described above.
>
>> Question:
>> Looks like lmer could not invert the matrix, right?
>
> Well, lmer never tries to invert matrices but it does factor them and
> that is where the problem is recognized. However, I think that
> singularity is a symptom of the problem, not the cause.
>
>> But how come lme
>> (which I thought was an earlier version of lmer) COULD invert it?
>
> The computational methods in the two packages are quite different.  I
> think that the methods in lme4 are superior because we have learned a
> bit in the last 10 years.
>
>
>> Greatly appreciate a clarification!
>>
>>
>> --
>> Dimitri Liakhovitski
>> Ninah Consulting
>> www.ninah.com
>>
>



-- 
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com
