[R-sig-ME] Wrong convergence warnings with glmer in lme4 version 1.1-6??

Ben Bolker bbolker at gmail.com
Thu Mar 27 18:17:48 CET 2014


Ben Bolker <bbolker at ...> writes:

PS ...

> 
>   Looks sensible.
> 
>   I don't have more time to devote to this right now, but my next
> steps would/will be:
> 
>  * restart the optimization from the fitted optimum (possibly? using
> refit(), or update(m1,start=getME(m1,c("theta","beta")))) and see if
> that makes the max grad smaller

  Steve Walker tried this, and it does 'work' -- successive refits
don't change the answer much, and the final gradient gets
progressively smaller (although it takes two refits to get below the
default test tolerance).  (The code above doesn't work as written
because the fixed-effect component of the starting parameter list has
to be named 'fixef' rather than 'beta' -- we should change this to
accept 'beta' as well ...)
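
  In case it's useful, here is a minimal sketch of that restart with
the renaming just described (this assumes the fitted model is still
called m1, as in the code above; the gradient check reads the optinfo
slot that glmer fits carry):

    ## restart the optimization from the previous fit's optimum;
    ## start= expects the fixed-effect component to be named 'fixef',
    ## so rename the 'beta' element that getME() returns
    pars <- getME(m1, c("theta", "beta"))
    names(pars)[names(pars) == "beta"] <- "fixef"
    m2 <- update(m1, start = pars)
    ## the quantity the convergence warning reports
    max(abs(m2@optinfo$derivs$gradient))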

>  * try a range of optimizers, especially nlminb, as described here:
> http://stackoverflow.com/questions/21344555/convergence-error-for-development-version-of-lme4/21370041#21370041
> 

  I tried this with the full range of optimizers listed at that link.
*Only* the built-in Nelder-Mead has the problem; all other optimizers
(optimx + nlminb or L-BFGS-B; nloptr + Nelder-Mead or BOBYQA; built-in
bobyqa) get max(abs(gradient)) considerably less than the tolerance --
but the actual fitted model changes very little (the log-likelihood increases
by <0.01).
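
  For anyone who wants to reproduce that comparison, a sketch along
these lines should work (again assuming the model m1 from above;
glmerControl() is the documented way to choose the optimizer, and the
optimx route needs the optimx package installed):

    ## refit with the built-in BOBYQA instead of the default Nelder-Mead
    m_bobyqa <- update(m1, control = glmerControl(optimizer = "bobyqa"))
    ## or go through optimx to get nlminb
    library(optimx)
    m_nlminb <- update(m1,
        control = glmerControl(optimizer = "optimx",
                               optCtrl = list(method = "nlminb")))
    ## compare max absolute gradients and log-likelihoods across fits
    sapply(list(NM = m1, bobyqa = m_bobyqa, nlminb = m_nlminb),
           function(m) c(maxgrad = max(abs(m@optinfo$derivs$gradient)),
                         logLik  = logLik(m)))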

  So this does seem to be a false positive.  That still doesn't
explain why it happens with Nelder-Mead, or under what circumstances
it's likely to happen (although big models do look prone to it).  We
should probably switch away from Nelder-Mead as the default throughout
(this was already done for lmer models in the last release, but not
for glmer), although I would love to do some more testing before
jumping out of the frying pan ...


>   Ben Bolker


