[R-sig-ME] mhglm as a substitute for glmer in case of quasi-complete separation

Ben Bolker bbolker at gmail.com
Fri Sep 23 13:11:57 CEST 2016



On 16-09-23 01:03 AM, Tibor Kiss wrote:
> Hello,
> 
> I have a question emerging from a case of quasi-complete separation
> (as I understand it): I am working on a random-intercept GLMM, where
> one of my predictors has eight levels, one of which seems to lead to
> quasi-complete separation, as the dependent variable has a (0/260)
> distribution for this level. In any case, the standard error for this
> level is about 20 times as high as its coefficient, and consequently,
> the Pr(z) is greater than 0.95.
> 
> I understand that Firth's penalized likelihood method is the method
> of choice, and hence used mhglm (from mbest), which allows for glmms
> with one random factor. The problem with the aforementioned level
> disappears, but the coefficients differ largely from those provided
> by glmer. mhglm deals with the offending level, but also pushes other
> factors that had always received Pr(z) < 0.05 to values above 0.05.
> 
> Here are my questions: Does anybody on this list have experience with
> mbest and mhglm in particular, or is there another alternative for
> mixed models? Is there another way to tweak glmer so that Firth's
> logistic regression can be included into glmer?
> 
> Thanks
> 
> T.
> 
> 
>  Prof. Dr. Tibor Kiss, Sprachwissenschaftliches Institut 
> Ruhr-Universität Bochum, D-44780 Bochum. Office: +49-234-322-5114
> 
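The symptom described above is easy to reproduce. The following sketch uses simulated data (the predictor, grouping factor, and sample sizes are all hypothetical, not the poster's data) to show how a factor level whose responses are all zeros inflates the Wald standard error:

```r
## Hypothetical illustration of quasi-complete separation in a
## random-intercept logistic GLMM
library(lme4)

set.seed(1)
d <- data.frame(
  pred = factor(rep(c("A", "B", "C"), each = 100)),
  grp  = factor(rep(1:10, 30))
)
## level "C" has response 0 for every observation -> separation
d$y <- rbinom(nrow(d), 1, ifelse(d$pred == "C", 0, 0.5))

m <- glmer(y ~ pred + (1 | grp), data = d, family = binomial)
## For predC, expect a very large |estimate|, a standard error many
## times larger still, and a Wald p-value close to 1
summary(m)$coefficients
```

The Wald machinery breaks down here because the maximum-likelihood estimate for the separated level drifts toward minus infinity.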

  Other solutions I have tried in the past:

  - Imposing a weakly informative prior on the fixed effects via either
MCMCglmm or blme::bglmer
  - Doing likelihood ratio tests (i.e.
anova(full_model, reduced_model_without_focal_term)), which, unlike the
Wald Z/p values in summary(), aren't strongly affected by complete
separation

http://bbolker.github.io/mixedmodels-misc/ecostats_chap.html#digression-complete-separation
has an example of the former.
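For concreteness, here is a minimal sketch of both approaches on simulated data (the data, the choice of prior covariance, and the reduced model are illustrative assumptions, not a recipe):

```r
library(lme4)
library(blme)

set.seed(1)
## Hypothetical data with one completely separated level ("C" is all zeros)
d <- data.frame(pred = factor(rep(c("A", "B", "C"), each = 100)),
                grp  = factor(rep(1:10, 30)))
d$y <- rbinom(nrow(d), 1, ifelse(d$pred == "C", 0, 0.5))

## 1. Weakly informative normal prior on the fixed effects via blme::bglmer;
##    like Firth penalization, the prior keeps the separated level's
##    coefficient finite. Variance 9 (sd 3) per coefficient is one
##    reasonable choice, not a universal default.
m_prior <- bglmer(y ~ pred + (1 | grp), data = d, family = binomial,
                  fixef.prior = normal(cov = diag(9, 3)))

## 2. Likelihood-ratio test for the focal term; unlike the Wald z test,
##    it does not collapse when a coefficient drifts toward +/- Inf.
m_full <- glmer(y ~ pred + (1 | grp), data = d, family = binomial)
m_red  <- glmer(y ~ 1    + (1 | grp), data = d, family = binomial)
anova(m_full, m_red)
```

The prior approach changes the point estimates; the likelihood-ratio approach keeps ordinary glmer fits but replaces the unreliable Wald p-values with a model comparison.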


