[R-sig-ME] mer_optimize

Ben Bolker bbolker at gmail.com
Sat Dec 1 23:00:30 CET 2012


Ivar Herfindal <ivar.herfindal at ...> writes:

> Some time ago, I raised a question on this mailing list about using
> .Call("mer_optimize", mymodel, PACKAGE="lme4") for a model that
> failed to converge, see:
> https://stat.ethz.ch/pipermail/r-sig-mixed-models/2008q3/001354.html
 
> As a "horrible hack" solution to my problem, I added a few lines
> with .Call("mer_optimize", mymodel, PACKAGE="lme4") in my script,
> and was satisfied that the model seemed to converge after a few of
> these runs. Then I forgot it all, until I recently ran the same
> script but on a different model.
 
> This new model did not have any convergence problems (at least none
> that were reported); it is a simple linear mixed model with one
> random factor. I was therefore surprised that the log-likelihood
> and parameter estimates change after I run .Call("mer_optimize"...)
> on the model. Since I use log-likelihoods to compare models
> (likelihood-ratio tests or AICc), it is quite important to have the
> most "correct" log-likelihood. The change is quite large (2-3 units
> in AICc) and differs between candidate models, so it can affect the
> ranking of models. The parameter estimates and their SEs also
> change, which affects the corresponding t-values. More complex
> models seem to show the largest change after calling mer_optimize.
 
> My question is then: should I continue to use
> .Call("mer_optimize"...) on my models until the log-likelihood no
> longer changes, or can I trust the first run of lmer as long as I
> do not get any warnings about convergence?
 
> I attach below an example of a model summary before and after
> running mer_optimize. Unfortunately, I have not been able to make
> a small, reproducible example.
 
> Best regards
> 
> Ivar
> 
# ##########################################################################
# > mod <- lmer(LRS ~ Apr*ALR + I(ALR^2) + (1|Lok), data=LRStab, REML=FALSE)
# > mod
# Linear mixed model fit by maximum likelihood
# Formula: LRS ~ Apr * ALR + I(ALR^2) + (1 | Lok)
#    Data: LRStab
#   AIC  BIC logLik deviance REMLdev
#  1724 1751   -855     1710    1730
# Random effects:
#  Groups     Name        Variance   Std.Dev.
#  Lok        (Intercept) 4.1578e-09 6.4481e-05
#  Residual               6.6801e+00 2.5846e+00
# Number of obs: 361, groups: Lok, 99
# 
# Fixed effects:
#              Estimate Std. Error t value
# (Intercept) -1.611300   1.359076  -1.186
# Apr          0.075193   0.205390   0.366
# ALR          2.677924   0.365207   7.333
# I(ALR^2)     0.004942   0.013571   0.364
# Apr:ALR     -0.118061   0.049655  -2.378
# 
# > .Call("mer_optimize", mod, PACKAGE="lme4")
# NULL
# > mod
# Linear mixed model fit by maximum likelihood
# Formula: LRS ~ Apr * ALR + I(ALR^2) + (1 | Lok)
#    Data: LRStab
#   AIC  BIC logLik deviance REMLdev
#  1723 1750 -854.3     1709    1728
# Random effects:
#  Groups     Name        Variance Std.Dev.
#  Lok        (Intercept) 0.33923  0.58243
#  Residual               6.34722  2.51937
# Number of obs: 361, groups: Lok, 99
# 
# Fixed effects:
#              Estimate Std. Error t value
# (Intercept) -1.620101   1.355985  -1.195
# Apr          0.074193   0.204893   0.362
# ALR          2.714618   0.364657   7.444
# I(ALR^2)     0.004308   0.013566   0.318
# Apr:ALR     -0.121868   0.049614  -2.456


  Ivar, 

  I would say it is reasonable to be concerned, but I would
really only worry about this if you are specifically interested
in the variance decomposition between the among-group and
residual components, because as you can see the fixed effects
and their standard errors change only by a tiny amount.  This is
probably a situation where the likelihood surface is very flat
with respect to the variance.
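  One quick check for this kind of flatness (sketched below with
modern lme4 and its built-in sleepstudy data, since I don't have
your LRStab data; 'mod' here is illustrative, not your model) is
to restart the optimizer from the fitted estimates -- the modern
analogue of the old .Call("mer_optimize", ...) hack -- and see
whether the log-likelihood moves:

```r
# Sketch, assuming lme4 >= 1.0 and its built-in sleepstudy data.
library(lme4)

mod <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)

# Restart the optimization from the current variance-parameter
# estimates ("theta").  For a well-converged fit the log-likelihood
# should barely change; a sizeable jump suggests the first run
# stopped short, as in your example.
mod2 <- update(mod, start = getME(mod, "theta"))
as.numeric(logLik(mod)) - as.numeric(logLik(mod2))
```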

  If you feel up to a minor challenge, I might recommend that
you try the development version of lme4 (available from R-Forge,
or the most recent version can be installed from GitHub via the
devtools package); it has more flexible optimization choices,
and more tools for computing and plotting likelihood profiles
and comparing likelihoods (deviances) for different parameter
values.  There are some stability issues with GLMMs that we're
still trying to fix, but as far as I know it works as well as
or better than the stable version on LMMs.
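  A sketch of what those tools look like in the development API
(which later became lme4 >= 1.0 on CRAN; again using the built-in
sleepstudy data rather than your model):

```r
# Sketch, assuming the lme4 >= 1.0 API (lmerControl, profile.merMod).
library(lme4)

# Pick an alternative optimizer explicitly via lmerControl().
fit <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE,
            control = lmerControl(optimizer = "bobyqa"))

# Profile the deviance over each parameter; a flat profile for the
# random-effect standard deviation (".sig01") would explain the
# unstable variance estimates you are seeing.
pp <- profile(fit)
confint(pp)            # profile confidence intervals for all parameters
# lattice::xyplot(pp)  # plot the profiles to inspect flatness directly
```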

  Ben Bolker


