[R-sig-ME] Implausible estimate of random effects variance in lmer (lme4 devel version), sensible estimate in CRAN version

Marko Bachl marko.bachl at uni-hohenheim.de
Fri Sep 20 13:02:43 CEST 2013


I ran some more tests that will hopefully help you track down the
source of the divergence.

The models work fine in the devel version as long as there is only one
random parameter per grouping factor. That is,

m = lmer(rtr2 ~ 1 + (1 | kombiid) + (1 | turnid) + (1 | idnr),
         data = d1, verbose = TRUE)
m = lmer(rtr2 ~ 0 + turnsec + (0 + turnsec | kombiid) +
           (0 + turnsec | turnid) + (0 + turnsec | idnr),
         data = d1, verbose = TRUE)

give the same sensible results as the CRAN version or as the devel
version with optimizer = "bobyqa".

I can replicate the general problem for models with more than one random
parameter per grouping factor. That is,

m = lmer(rtr2 ~ 0 + turnsec + turnsec_2 +
           (0 + turnsec + turnsec_2 | kombiid) +
           (0 + turnsec + turnsec_2 | turnid) +
           (0 + turnsec + turnsec_2 | idnr),
         data = d1, verbose = 2)

where turnsec_2 is turnsec^2 (i.e. a quadratic growth term over time),
gives an implausibly large variance estimate for turnsec_2 in turnid.

m = lmer(rtr2 ~ poly(turnsec, 2) + (poly(turnsec, 2) | kombiid) +
           (poly(turnsec, 2) | turnid) + (poly(turnsec, 2) | idnr),
         data = d1, verbose = 2)

gives a sensible estimate of the variance of the random intercepts but
implausibly large variance estimates for both poly(turnsec, 2) parameters
in turnid.
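
If it helps, one check I could run is whether the Nelder-Mead fit simply
stops at a worse optimum than bobyqa, by refitting and comparing the
log-likelihoods -- a sketch (the object names m_nm and m_bq are just
placeholders for two fits of the quadratic model above):

m_nm = lmer(rtr2 ~ 0 + turnsec + turnsec_2 +
              (0 + turnsec + turnsec_2 | kombiid) +
              (0 + turnsec + turnsec_2 | turnid) +
              (0 + turnsec + turnsec_2 | idnr),
            data = d1,
            control = lmerControl(optimizer = "Nelder_Mead"))
m_bq = update(m_nm, control = lmerControl(optimizer = "bobyqa"))
c(NelderMead = logLik(m_nm), bobyqa = logLik(m_bq))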


Also, I can replicate all of these problems when I fit the same models to
a different but structurally identical data set (the ratings for the other
candidate in the televised debate).

Finally, I looked at the variances var(ranef(m)$turnid) of the conditional
modes for the models fitted with the devel version. These variances are
close to the sensible estimates from the CRAN version. I know that these
values are not identical to the variance estimates given in summary(m), but
I would guess they should be approximately equal? Or am I getting something
wrong here?
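
To be explicit, this is the comparison I mean (a sketch, with m standing
for any of the fitted models above):

## empirical variances/covariances of the conditional modes (BLUPs) for turnid
var(ranef(m)$turnid)

## estimated variance-covariance matrix of the random effects for turnid
VarCorr(m)$turnid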

One last thing: I also removed the grouping factor "kombiid" from the
models, but that does not change the behavior described above.

Concerning Kevin's advice to "adjust the settings of nelder_mead": I
really don't understand how the optimizers work, so I am not sure which
settings should or should not be adjusted. If you give me some more
advice, I will happily try some settings.
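
From reading the documentation, I assume such settings would be passed
through lmerControl(); something like the sketch below, where the
particular option and value (maxfun = 1e5) are only a guess on my part:

m = lmer(rtr2 ~ 0 + turnsec + turnsec_2 +
           (0 + turnsec + turnsec_2 | kombiid) +
           (0 + turnsec + turnsec_2 | turnid) +
           (0 + turnsec + turnsec_2 | idnr),
         data = d1, verbose = 2,
         control = lmerControl(optimizer = "Nelder_Mead",
                               optCtrl = list(maxfun = 1e5)))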

Best regards
Marko





2013/9/20 Ben Bolker <bbolker at gmail.com>:
> Kevin Wright <kw.stat at ...> writes:
>
>>
>> > Or is
>> > it easier just to accept that some models work with some optimizers
>> > and not with others?
>> >
>>
>> I've spent a lot of time comparing results from SAS, asreml, nlme and lme4
>> (old/new).   Sometimes you just get different results, no matter how hard
>> you try.  Your example is the 2nd or 3rd case that I've seen where
>> nelder_mead is giving substantially different results from bobyqa.  It
>> would be great if you could investigate your data and see if you can adjust
>> the settings of nelder_mead to get better results.  This would be useful
>> information.
>>
>> Kevin
>>
>
>   I don't have an answer yet, but I've posted this as
>
> https://github.com/lme4/lme4/issues/130
>
> ...
>
> Kevin, I wonder if you can comment on the divergence among the
> other platforms I'm less familiar with (SAS/AS-REML) as well as
> nlme/lme4[old]/lme4[new] ... do you get the sense that new-lme4
> is an outlier, or is it just more general variation?  (I'm not
> asking for anything precise, just your sense of the problem ...)
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



-- 
www.komm.uni-hohenheim.de/bachl


