[R-sig-ME] Convergence in glmmTMB but not glmer
Ben Bolker
bbolker at gmail.com
Tue Oct 20 20:14:49 CEST 2020
On 10/20/20 2:02 PM, Thierry Onkelinx wrote:
> Daniel sent me the data in private.
>
> A couple of remarks on the dataset.
> - the response is non-integer. You'll need to convert it to integer
> (total number) and use an appropriate offset term (log(nights)).
> - make sure the factor covariate is a factor and not an integer.
If the response is non-integer, that makes my comment about
overdispersion not necessarily relevant (check again after re-fitting).
When using an offset such as log(nights), it's often a good idea to
*also* try log(nights) as an ordinary predictor: the offset assumes
that the number of counts is strictly proportional to the number of
nights measured (log(counts) ~ log(nights) + <stuff> ->
counts ~ nights*exp(stuff)), whereas estimating a coefficient for
log(nights) allows for some saturation effects
(log(counts) ~ alpha*log(nights) + <stuff> ->
counts ~ nights^alpha*exp(stuff)).
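In code, the two parameterizations look like this (a sketch, not the poster's actual model: `dat`, `count`, `nights`, `x`, and `site` are placeholder names):

```r
library(glmmTMB)

## Offset: log(nights) enters with its coefficient fixed at 1, so
## expected counts are strictly proportional to nights.
m_offset <- glmmTMB(count ~ x + offset(log(nights)) + (1 | site),
                    family = poisson, data = dat)

## Predictor: the coefficient (alpha) on log(nights) is estimated, so
## counts ~ nights^alpha, allowing saturation (alpha < 1).
m_pred <- glmmTMB(count ~ x + log(nights) + (1 | site),
                  family = poisson, data = dat)

## The offset model is the predictor model with alpha constrained to 1,
## so a likelihood-ratio test compares them:
anova(m_offset, m_pred)
```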
>
> Please see if that solves the problem. What happens if you use a nbinom
> distribution as Ben suggested?
>
> Personally, I don't like to "standardise" covariates. It makes them much
> harder to interpret. I prefer to center on a more meaningful value than
> the mean, and to rescale by changing the unit. E.g. Age ranges from 1 to
> 15 with mean 6.76. I'd use something like AgeC = (Age - 5) / 10. This
> gives a similar range to the standardisation of Age, but one unit of
> AgeC represents 10 years, and the intercept refers to Age = 5, making
> the parameter estimates easier to interpret IMHO.
Yes, although 'strict' standardization (scaling by the predictor SD, or
by 2*SD) allows direct interpretation of the parameters as a kind of
effect size (Schielzeth 2010), whereas 'human-friendly' standardization
gains interpretability at the cost of making the comparison of
magnitudes only approximate.
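Thierry's rescaling, next to 'strict' standardization, in base R (the Age values here are made up for illustration):

```r
Age  <- c(1, 5, 10, 15)          # hypothetical ages, range 1-15

## 'human-friendly': centered at Age = 5, one unit = 10 years
AgeC <- (Age - 5) / 10

## 'strict': centered at the mean, one unit = 1 SD
AgeZ <- (Age - mean(Age)) / sd(Age)

AgeC  # -0.4 0.0 0.5 1.0
```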
>
> Best regards,
>
> ir. Thierry Onkelinx
> Statisticus / Statistician
>
> Vlaamse Overheid / Government of Flanders
> INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE
> AND FOREST
> Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
> thierry.onkelinx at inbo.be
> Havenlaan 88 bus 73, 1000 Brussel
> www.inbo.be
>
> ///////////////////////////////////////////////////////////////////////////////////////////
> To call in the statistician after the experiment is done may be no more
> than asking him to perform a post-mortem examination: he may be able to
> say what the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does not
> ensure that a reasonable answer can be extracted from a given body of
> data. ~ John Tukey
> ///////////////////////////////////////////////////////////////////////////////////////////
>
>
>
> On Tue 20 Oct 2020 at 19:40, Ben Bolker <bbolker at gmail.com> wrote:
>
> As Thierry says, the data would allow us to give a more detailed
> answer. However:
>
> * the overall goodness-of-fit is very similar (differences of ~0.001
> or less on the deviance scale)
> * the random-effects std dev estimate is similar (2% difference)
> * the parameter estimates are quite similar
> * the standard errors of the coefficients look reasonable for glmmTMB
> and bogus for lme4 (in any case, if there's a disagreement I would be
> more suspicious of the platform that gave convergence warnings)
>
> There's also strong evidence of overdispersion (deviance/resid df >
> 6); you should definitely do something to account for that (check for
> nonlinearity in residuals, switch to negative binomial, add an
> observation-level random effect ...)
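Two of those remedies, sketched in code (`dat`, `count`, `x`, and `site` are placeholder names, not the poster's actual variables):

```r
library(glmmTMB)
library(lme4)

## switch to negative binomial (quadratic mean-variance relationship)
m_nb <- glmmTMB(count ~ x + (1 | site), family = nbinom2, data = dat)

## observation-level random effect: one factor level per row soaks up
## extra-Poisson variation
dat$obs <- factor(seq_len(nrow(dat)))
m_olre <- glmer(count ~ x + (1 | site) + (1 | obs),
                family = poisson, data = dat)
```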
>
> You might try the usual set of remedies for convergence problems
> (see ?troubleshooting, ?convergence in lme4), e.g. ?allFit. Or try
> re-running the lme4 model with starting values set to the glmmTMB
> estimates.
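For example (a sketch; `m1` stands for the problematic glmer fit and `m2` for a hypothetical glmmTMB fit of the same model):

```r
library(lme4)

## refit with every available optimizer and compare
aa <- allFit(m1)
summary(aa)$fixef    # do the optimizers agree on the fixed effects?

## restart glmer at the glmmTMB estimates: theta is the random-effect
## std dev, fixef the fixed-effect coefficients
m1b <- update(m1,
              start = list(theta = sqrt(VarCorr(m2)$cond$site[1]),
                           fixef = fixef(m2)$cond))
```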
>
> Overall, though, I would trust the glmmTMB results.
>
> On 10/20/20 12:56 PM, Daniel Wright wrote:
> > Hello,
> >
> > I'm having convergence issues when using glmer in lme4, but not
> > glmmTMB. I'm running a series of generalized linear mixed-effect
> > models with a Poisson distribution for ecological count data. I've
> > included a random effect of site (n = 26) in each model. All
> > non-factor covariates are standardized.
> >
> > The coefficient estimates of models run in glmer and glmmTMB are very
> > similar, but models run in glmer are having convergence issues. Any
> > advice would be appreciated, as I'm not sure whether I can rely on my
> > results from glmmTMB.
> >
> > Attached are examples of output from glmmTMB vs glmer:
> >
> >
> > _______________________________________________
> > R-sig-mixed-models at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> >
>
>