[R-sig-ME] Slight differences in fitted coefficients in lme4_1.0-6 compared to lme4_0.999999-2
Ben Bolker
bbolker at gmail.com
Sat Feb 8 02:57:18 CET 2014
This is all very helpful, and reinforces our decision (mentioned by
Steve Walker) to switch to a bobyqa default in an imminent release. We
had hoped to do more systematic testing, but anecdotal evidence from
many users is better than anecdotal evidence just from the problems the
authors have stumbled across. If there are users out there who have
encountered the *opposite* scenario (Nelder-Mead working better than
bobyqa), we'd love to hear about it, but we know this case is harder to
detect: because N-M is currently the default, people are more likely to
notice problems with N-M and switch to bobyqa than the reverse.
Tom: default contrasts haven't changed, so I don't know what's up with
your intercept terms. Maybe options(contrasts=...) had been set
previously? You could check attributes(old_lme4_model@frame) (in
lme4.0) against attributes(model.frame(new_lme4_model)) (in lme4) to
compare ...
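For instance, a minimal sketch of that comparison (old_lme4_model and
new_lme4_model are placeholders for your own fitted objects, one from
each package, and the last line assumes both model frames contain the
same variables in the same order):

  ## old API (lme4.0): the model frame is stored in the @frame slot
  old_contrasts <- lapply(old_lme4_model@frame, attr, "contrasts")
  ## new API (lme4 >= 1.0): use the model.frame() accessor
  new_contrasts <- lapply(model.frame(new_lme4_model), attr, "contrasts")
  ## factor columns only carry a "contrasts" attribute when non-default
  ## contrasts were assigned; otherwise the global option applies
  getOption("contrasts")
  ## names of any columns whose contrasts attributes differ
  names(which(!mapply(identical, old_contrasts, new_contrasts)))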
If you are an experienced user, it would be great if you could try the
most recent development version (via
devtools::install_github("lme4","lme4")) and report unusual or
interesting results to the list ... we're particularly interested in (1)
bobyqa glitches and (2) obviously false-positive warnings from the new
convergence-testing code -- especially examples of singular fits that
report large gradients (more generally, any last-minute comments or
pleas for bug fixes/minor features should be reported to us soon). We
don't currently consider any of the issues at
https://github.com/lme4/lme4/issues?state=open release-critical, except
https://github.com/lme4/lme4/issues/120, which should be closed as soon
as we can convince ourselves there aren't too many false positives.
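For anyone trying this, here is a minimal sketch of the kind of
side-by-side check that is useful to report (sleepstudy just stands in
for your own problematic model, and the commented install line repeats
the devtools call above):

  ## devtools::install_github("lme4","lme4")   # development version
  library(lme4)
  ## same model, both optimizers; ML so the deviances are comparable
  fit_nm  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
                  REML = FALSE,
                  control = lmerControl(optimizer = "Nelder_Mead"))
  fit_bob <- update(fit_nm, control = lmerControl(optimizer = "bobyqa"))
  ## the fit with the lower deviance has found the better optimum
  c(Nelder_Mead = deviance(fit_nm), bobyqa = deviance(fit_bob))
  ## and how much the fixed-effect estimates differ
  fixef(fit_nm) - fixef(fit_bob)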
On 14-02-07 06:26 PM, Ulf Köther wrote:
> @ Jake: I am not a statistician or a programmer in any sense, and
> hence no authority here, but I can only support this observation. I
> often have noisy data which in 90% of cases cannot be fitted using NM
> (non-convergence even with many iterations) but which is consistently
> handled by bobyqa (with the same number of iterations)... And I do
> not have the impression that bobyqa's results are inaccurate, but
> rather that NM gets stuck in many situations.
>
> Ulf
>
> On 07.02.2014 23:09, Jake Westfall wrote:
>> Not sure if this thread is the time/place for me to bring this up,
>> but here goes... I *routinely* find that the new Nelder-Mead
>> optimizer in lme4 >= 1.0 provides worse solutions than the old
>> bobyqa optimizer -- "worse" in the sense that, comparing the same
>> model fitted to the same dataset using NM vs. bobyqa, the
>> coefficients are noticeably different and deviance for the former
>> model is noticeably higher. When I switch to bobyqa I pretty much
>> reproduce the results of my models fitted under lme4 < 1.0... and
>> bobyqa is faster too! At this point, I've gotten to where I just
>> always instruct lme4 to use bobyqa and don't even check anymore to
>> see what Nelder-Mead comes up with. One very important thing to
>> mention here is that the overwhelming majority of models that I fit
>> involve crossed random effects. So maybe the new Nelder-Mead
>> optimizer fairly consistently outperforms bobyqa for nested random
>> effects models, and this is the motivation for making it the new
>> lme4 default, but in my experience, for the kind of models that I
>> fit, bobyqa pretty much always does better.
>>
>> Jake
>>
>>> Date: Fri, 7 Feb 2014 15:51:44 -0500
>>> From: bbolker at gmail.com
>>> To: r-sig-mixed-models at r-project.org
>>> Subject: Re: [R-sig-ME] Slight differences in fitted coefficients
>>> in lme4_1.0-6 compared to lme4_0.999999-2
>>>
>>> On 14-02-07 03:34 PM, Tom Wenseleers wrote:
>>>> Dear all, I noticed that I get very slight differences in my
>>>> current lme4 1.0-6 models compared to the old ones I obtained
>>>> earlier using lme4_0.999999-2. I was just wondering whether it
>>>> would somehow still be possible to reproduce the output of the old
>>>> lme4_0.999999-2, e.g. by setting appropriate optimizer options? Or
>>>> is this not possible? I also tried installing the old lme4 version
>>>> using install_url in the devtools package, but if I try this I get
>>>> a complaint that the old version doesn't work with R 3.0.2. Is
>>>> there any easy way to go back to the old version (I need this to
>>>> be able to fully reproduce published results)?
>>>>
>>> I think you should be able to install lme4.0 from
>>> http://lme4.r-forge.r-project.org/repos/ to reproduce previous
>>> outputs. You *might* be able to reproduce previous results by
>>> setting
>>> control=lmerControl(optimizer="optimx", optCtrl=list(method="nlminb")),
>>> but I don't think we could guarantee that -- too much of the
>>> internal machinery has changed too radically.
>>>
>>> Ben Bolker
>>>
>>>
>>>
>>>> Cheers, Tom
>>>>