[R-sig-ME] convergence issues on lme4 and incoherent error messages

Ben Bolker bbolker at gmail.com
Thu Jun 13 06:31:35 CEST 2019


  Details below

On Wed, Jun 12, 2019 at 12:38 AM Cristiano Alessandro
<cri.alessandro at gmail.com> wrote:
>
> Hi all,
>
> I am having trouble fitting a mixed-effects model. I keep getting the
> following warnings, independently of the optimizer that I use (I tried
> almost all of them):
>
> Warning messages:
> 1: 'rBind' is deprecated.
>  Since R version 3.2.0, base's rbind() should work fine with S4 objects

  This warning is harmless; it most likely comes from an outdated
version of lme4 (we fixed it in the devel branch 15 months ago:
https://github.com/lme4/lme4/commit/9d5d433d40408222b290d2780ab6e9e4cec553b9)
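
  If you want the fix now, updating to the development version should
work; a minimal sketch, assuming you have the 'remotes' package and a
compiler toolchain installed:

  ## install the development version of lme4 from GitHub
  install.packages("remotes")
  remotes::install_github("lme4/lme4")
  packageVersion("lme4")  ## check the installed version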

> 2: In optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,  :
>   Parameters or bounds appear to have different scalings.
>   This can cause poor performance in optimization.
>   It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.

   Have you tried scaling & centering the predictor variables?
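
   For example, something like this (variable names here are
hypothetical; substitute your own predictors and grouping factor):

  library(lme4)
  ## center and scale a continuous predictor to mean 0, SD 1;
  ## as.numeric() converts the one-column matrix scale() returns
  dat$x1_sc <- as.numeric(scale(dat$x1))
  m <- lmer(y ~ x1_sc + (x1_sc | subject), data = dat)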

> 3: Model failed to converge with 5 negative eigenvalues: -2.5e-01 -5.8e-01
> -8.2e+01 -9.5e+02 -1.8e+03
>
> This suggests that the optimization did not converge. On the other hand, if
> I call summary() on the "fitted" model, I receive (among other things)
> a convergence code = 0, which according to the documentation means that the
> optimization has indeed converged. Did the optimization converge or not?
>
> convergence code: 0

   These do look large/worrying, but they could be the result of bad
scaling (see above).  There are two levels of convergence checking in
lme4: one at the level of the nonlinear optimizer itself (here
L-BFGS-B, which is what reports the convergence code of 0), and a
secondary attempt to estimate the Hessian and scaled gradient at the
reported optimum (which is what gives you the "model failed to
converge" warning).  ?convergence gives much more detail on this
subject ...
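
   If you want to inspect the derivative check yourself, something
along these lines (assuming your fitted model is called m):

  ## finite-difference gradient and Hessian at the reported optimum
  dd <- m@optinfo$derivs
  ## the scaled gradient that the convergence check looks at
  max(abs(with(dd, solve(Hessian, gradient))))
  ## eigenvalues of the Hessian; negative values trigger the warning
  sort(eigen(dd$Hessian, only.values = TRUE)$values)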

> Parameters or bounds appear to have different scalings.
>   This can cause poor performance in optimization.
>   It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
>
> Note that I used 'optimx' ("L-BFGS-B") for this specific run of the
> optimization

  I would *not* generally recommend this.  We don't have
analytically/symbolically computed gradients for the mixed-model
likelihood, so derivative-based optimizers like L-BFGS-B will be using
finite differencing to estimate the gradients, which is generally slow
and numerically imprecise.  That's why the default choices are
derivative-free optimizers (BOBYQA, Nelder-Mead etc.).
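
   For example (again with hypothetical object names), you can refit
with the default derivative-free optimizer, or fit with every
available optimizer and compare the results:

  ## refit using BOBYQA instead of L-BFGS-B
  m2 <- update(m, control = lmerControl(optimizer = "bobyqa"))
  ## fit with all available optimizers and compare
  summary(allFit(m))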

  I see there's much more discussion at the SO question, I may or may
not have time to check that out.

> I also get other weird stuff that I do not understand:
> negative entries in the var-cov matrix, which I could not get rid of even
> if I simplify the model a lot (see
> https://stats.stackexchange.com/questions/408504/variance-covariance-matrix-with-negative-entries-on-mixed-model-fit
> , with data). I thought of further simplifying the var-cov matrix by
> making it diagonal, but I am still struggling with how to do that in lme4 (see
> https://stats.stackexchange.com/questions/412345/diagonal-var-cov-matrix-for-random-slope-in-lme4
> ).
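
   On the diagonal-covariance question: the double-bar syntax drops
the correlations between the random effects, e.g. (hypothetical
names; note that || only fully separates terms for numeric covariates
in lme4):

  library(lme4)
  ## uncorrelated (diagonal) random intercept and slope
  m <- lmer(y ~ x + (x || subject), data = dat)
  ## equivalent to: y ~ x + (1 | subject) + (0 + x | subject)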
>
> Any help is highly appreciated. Thanks!
>
> Cristiano


