[R-sig-ME] convergence issues on lme4 and incoherent error messages
Cristiano Alessandro
cri.alessandro at gmail.com
Fri Jun 14 00:58:04 CEST 2019
This does not happen. However, there are combinations of predictor
variables that *for some subjects* have no data on the response, as
explained here:
https://stats.stackexchange.com/questions/412703/zeroed-random-effects-after-fitting-a-mixed-effect-model?noredirect=1#comment770806_412703
Cristiano
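
A per-subject cross-tabulation makes such empty cells easy to spot. The data frame and column names below are made up for illustration only (they are not the data from this thread): subject "s2" has no observations for one predictor combination, and the zero in the table flags it.

```r
# Hypothetical data: subject s2 has no observations for the
# (predictor1 = "b", predictor2 = "y") combination.
df <- data.frame(
  subject    = rep(c("s1", "s2"), each = 4),
  predictor1 = factor(rep(c("a", "a", "b", "b"), times = 2)),
  predictor2 = factor(rep(c("x", "y"), times = 4)),
  response   = c(1, 0, 1, 1, 0, 0, 1, 0)
)
df <- df[!(df$subject == "s2" &
           df$predictor1 == "b" &
           df$predictor2 == "y"), ]

# Zeros in this table flag subject-by-predictor cells with no data
# on the response:
with(df, table(subject, predictor1, predictor2))
```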
On Thu, Jun 13, 2019 at 5:30 PM Ben Bolker <bbolker at gmail.com> wrote:
>
> The only other thing I can think of is that you may have a *complete
> separation* issue or near-complete separation, i.e. some combinations of
> predictor variables may have responses that are all-zero or all-one ...
>
> On 2019-06-13 1:02 p.m., Cristiano Alessandro wrote:
> > Thanks! I do have the call:
> >
> > dataframe$predictor1<-as.factor(dataframe$predictor1)
> > dataframe$predictor2<-as.factor(dataframe$predictor2)
> >
> > at the beginning of my code.
> > Cristiano
> >
> > On Thu, Jun 13, 2019 at 10:20 AM René <bimonosom at gmail.com> wrote:
> >
> > sorry... "then the model should [not] care about scaling "
> >
> > On Thu, 13 June 2019 at 17:19, René <bimonosom at gmail.com> wrote:
> >
> > Hi, :)
> >
> > "But the predictor variables are all categorical with "sum"
> > coding; I am not sure what it means to center and scale a
> > categorical
> > variable."
> >
> > Sure, it means nothing :))
> > If your predictors are actually categorical, then the model
> > should not care about scaling at all (and should not come up with
> > this warning).
> > This implies that you might want to check whether the
> > categorical predictors are actually stored as factors in the dataframe.
> > e.g.
> > dataframe$predictor1<-as.factor(dataframe$predictor1)
> > dataframe$predictor2<-as.factor(dataframe$predictor2)
> >
> > or (the same but less affected by how the variable is coded
> > beforehand... )
> >
> > dataframe$predictor1<-as.factor(as.character(dataframe$predictor1))
> >
> >
> > Best, René
> >
> >
> > On Thu, 13 June 2019 at 16:27, Cristiano Alessandro
> > <cri.alessandro at gmail.com> wrote:
> >
> > Thanks a lot for your help!
> >
> > Regarding "centering and scaling": I am not familiar with this;
> > I will check it out. But the predictor variables are all
> > categorical with "sum" coding; I am not sure what it means to
> > center and scale a categorical variable. Is there a theory
> > behind this or a text I could look at?
> >
> > Best
> > Cristiano
> >
> > On Wed, Jun 12, 2019 at 11:31 PM Ben Bolker <bbolker at gmail.com> wrote:
> >
> > > Details below
> > >
> > > On Wed, Jun 12, 2019 at 12:38 AM Cristiano Alessandro
> > > <cri.alessandro at gmail.com> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I am having trouble fitting a mixed effect model. I keep getting
> > > > the following warning, independently of the optimizer that I use
> > > > (I tried almost all of them):
> > > >
> > > > Warning messages:
> > > > 1: 'rBind' is deprecated.
> > > >  Since R version 3.2.0, base's rbind() should work fine with S4 objects
> > >
> > > This warning is harmless; it most likely comes from an outdated
> > > version of lme4 (we fixed it in the devel branch 15 months ago:
> > > https://github.com/lme4/lme4/commit/9d5d433d40408222b290d2780ab6e9e4cec553b9 )
> > >
> > > > 2: In optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower, :
> > > >   Parameters or bounds appear to have different scalings.
> > > >   This can cause poor performance in optimization.
> > > >   It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
> > >
> > > Have you tried scaling & centering the predictor variables?
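
For readers whose predictors are numeric (further down the thread it turns out these are categorical, where this does not apply), centering and scaling is a one-liner with scale(). The variable names and values below are hypothetical:

```r
# Hypothetical numeric predictors on very different scales:
set.seed(1)
df <- data.frame(x1 = rnorm(50, mean = 100,   sd = 20),
                 x2 = rnorm(50, mean = 0.001, sd = 0.0002))

# scale() centers each column to mean 0 and rescales it to sd 1,
# which puts the fixed-effect parameters on comparable scales:
df$x1_s <- as.numeric(scale(df$x1))
df$x2_s <- as.numeric(scale(df$x2))
c(mean = mean(df$x1_s), sd = sd(df$x1_s))
```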
> > >
> > > > 3: Model failed to converge with 5 negative eigenvalues:
> > > > -2.5e-01 -5.8e-01 -8.2e+01 -9.5e+02 -1.8e+03
> > > >
> > > > This suggests that the optimization did not converge. On the
> > > > other hand, if I call summary() on the "fitted" model, I receive
> > > > (among other things) a convergence code = 0, which according to
> > > > the documentation means that the optimization has indeed
> > > > converged. Did the optimization converge or not?
> > > >
> > > > convergence code: 0
> > >
> > > These do look large/worrying, but could be the result of bad
> > > scaling (see above). There are two levels of checking for
> > > convergence in lme4: one at the level of the nonlinear optimizer
> > > itself (L-BFGS-B, which gives a convergence code of zero) and a
> > > secondary attempt to estimate the Hessian and scaled gradient at
> > > the reported optimum (which is giving you the "model failed to
> > > converge" warning). ?convergence gives much more detail on this
> > > subject ...
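
One practical way to act on this advice is lme4's allFit(), which refits the same model with every optimizer lme4 can find; if the fits that succeed agree closely on the log-likelihood, a convergence warning is likely a false positive. The sleepstudy model below is just an illustration, not the model from this thread:

```r
library(lme4)

# Illustrative model (not the one from this thread): refit it with
# all available optimizers and compare the results.
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
af  <- allFit(fit)
ss  <- summary(af)
ss$llik   # log-likelihood per optimizer; these should agree closely
ss$msgs   # convergence messages per optimizer, if any
```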
> > >
> > > > Parameters or bounds appear to have different scalings.
> > > > This can cause poor performance in optimization.
> > > > It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
> > > >
> > > > Note that I used 'optimx' ("L-BFGS-B") for this specific run of
> > > > the optimization.
> > >
> > > I would *not* generally recommend this. We don't have
> > > analytically/symbolically computed gradients for the mixed-model
> > > likelihood, so derivative-based optimizers like L-BFGS-B will be
> > > using finite differencing to estimate the gradients, which is
> > > generally slow and numerically imprecise. That's why the default
> > > choices are derivative-free optimizers (BOBYQA, Nelder-Mead etc.).
> > >
> > > I see there's much more discussion at the SO question; I may or
> > > may not have time to check that out.
> > >
> > > > I also get other weird stuff that I do not understand: negative
> > > > entries in the var-cov matrix, which I could not get rid of even
> > > > if I simplify the model a lot (see
> > > > https://stats.stackexchange.com/questions/408504/variance-covariance-matrix-with-negative-entries-on-mixed-model-fit
> > > > , with data). I thought of further simplifying the var-cov
> > > > matrix, making it diagonal, but I am still struggling with how to
> > > > do that in lme4 (see
> > > > https://stats.stackexchange.com/questions/412345/diagonal-var-cov-matrix-for-random-slope-in-lme4
> > > > ).
> > > >
> > > > Any help is highly appreciated. Thanks!
> > > >
> > > > Cristiano
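
For the diagonal var-cov question quoted above: lme4's double-bar syntax (||) drops the correlations between random effects, giving a diagonal var-cov matrix for the random terms. Again, sleepstudy is only a stand-in for the real data here:

```r
library(lme4)

# (Days || Subject) expands to (1 | Subject) + (0 + Days | Subject):
# independent random intercepts and slopes, i.e. a diagonal var-cov
# matrix with no intercept-slope correlation term.
fit_diag <- lmer(Reaction ~ Days + (Days || Subject), data = sleepstudy)
VarCorr(fit_diag)
```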
> > > >
> > > > _______________________________________________
> > > > R-sig-mixed-models at r-project.org mailing list
> > > > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> > >
> >
> >
>