[R-sig-ME] Follow-up: PBmodcomp: pwrssUpdate does not converge with glmer
lorenz.gygax at agroscope.admin.ch
Thu Jun 11 06:58:54 CEST 2015
Dear all,
Ben Bolker was kind enough to point out that the most current (development) version of lme4 includes some work on predict, simulate, and refit that might help with my problem. Indeed, with this version the error mentioned in the subject of this e-mail no longer occurs (at least as far as we have tested), which is very helpful.
Still, in the bootstrap simulations that PBmodcomp uses to calculate the p-values, quite a few models (on the order of 10%) do not fully converge. Is there a recommendation on the maximum proportion of such non-converging models that should not be exceeded? Or are there other ideas about what could be done in such a case, e.g. using one of the other approaches implemented in pbkrtest? So far it does not seem to be a huge issue, because the different p-value estimates are quite close to one another (even the LRT).
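
For concreteness, this is roughly how we compare the different approaches (a
minimal sketch; m.full and m.red are placeholders for our actual full and
reduced models):

    library(pbkrtest)
    pb <- PBmodcomp(m.full, m.red, nsim = 1000)
    pb             # bootstrap p-value; the print-out also indicates how many
                   # of the simulated fits could actually be used
    summary(pb)    # additionally lists the approximations implemented in
                   # pbkrtest (Gamma, Bartlett-corrected LRT, F) next to the
                   # plain LRT p-value
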
Thanks again and regards, Lorenz
-----Original Message-----
From: Ben Bolker [mailto:bbolker at gmail.com]
Sent: Wednesday, 10 June 2015 13:52
To: Gygax Lorenz Agroscope
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] PBmodcomp: pwrssUpdate does not converge with glmer
I'm not 100% sure, but a lot of this (at least the lme4 end, not
necessarily the pbkrtest end) sounds like the now-resolved (in the
development, soon-to-be-released version 1.1-8) issue
https://github.com/lme4/lme4/issues/231 .
On Wed, Jun 10, 2015 at 6:35 AM, <lorenz.gygax at agroscope.admin.ch> wrote:
> Dear all,
>
> In a current project (a 2 x 2 x 2 factorial design) we are interested in
> calculating p-values for binary outcomes (we are aware that such an approach
> is not uncontroversial, but circumstances are such that the results will be
> most easily communicated if we can conduct step-wise backwards model selection
> and report p-values).
>
> The experimental design was hierarchically nested (350 observations
> conducted in 178 phases of the experiment nested in 90 animal-IDs nested in
> 24 facilities). The three factors can and should be assigned to three
> different hierarchical levels (error, phase, facility).
>
> Even though the data do not look extreme in any way (zeros and ones occur in
> all 8 factor combinations), there are some convergence issues when fitting the
> models. These can mostly be dealt with by using
> glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 5000)).
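>
> A minimal sketch of the kind of call we use (all variable and data names are
> placeholders for the real ones):
>
>     library(lme4)
>     m.full <- glmer(outcome ~ A * B * C + (1 | facility/animal/phase),
>                     data = dat, family = binomial,
>                     control = glmerControl(optimizer = "bobyqa",
>                                            optCtrl = list(maxfun = 5000)))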
>
> Due to the sample size and the assignment of the fixed effects to the
> different hierarchical levels, we would like to use a parametric bootstrap to
> calculate the p-values, as implemented in the package pbkrtest (very nice!).
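>
> Roughly along these lines (a sketch with placeholder names; here the reduced
> model simply drops the three-way interaction):
>
>     library(pbkrtest)
>     m.red <- update(m.full, . ~ . - A:B:C)   # reduced model without A:B:C
>     pb <- PBmodcomp(m.full, m.red, nsim = 1000)
>     pb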
>
> Obviously, over the course of the bootstrap some of the models will not
> converge. As far as I can see, the bootstrap sample is simply reduced
> accordingly. Some models, unfortunately, do not result in a warning but in an
> error: "pwrssUpdate did not converge in (maxit) iterations".
>
> These errors cause PBmodcomp to fail. Does anyone know whether there is a
> reason why PBmodcomp reacts differently to warnings and errors in the
> bootstrapped glmer fits? Or is this just how it has grown historically, such
> that warnings are caught but errors are not? If the latter, could catching
> errors easily be incorporated as well, and where would that need to be done?
> I cannot find the relevant code in either pbkrtest::PBmodcomp.merMod or
> pbkrtest::PBrefdist.merMod.
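>
> Just to illustrate the kind of error handling I have in mind (a hand-rolled
> sketch, not the actual pbkrtest code; lrt.obs would be the observed LRT
> statistic and m.full / m.red the fitted full and reduced models):
>
>     ## hand-rolled parametric bootstrap of the LRT: errors become NA so that
>     ## failed refits simply reduce the usable bootstrap sample
>     nsim <- 1000
>     lrt.sim <- vapply(seq_len(nsim), function(i) {
>       ysim <- simulate(m.red)[[1]]   # simulate a response under the null model
>       tryCatch({
>         f1 <- refit(m.full, ysim)
>         f0 <- refit(m.red, ysim)
>         as.numeric(2 * (logLik(f1) - logLik(f0)))
>       }, error = function(e) NA_real_)
>     }, numeric(1))
>     mean(lrt.sim >= lrt.obs, na.rm = TRUE)   # bootstrap p-value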
>
> Many thanks for your ideas and best regards, Lorenz
> -
> Lorenz Gygax, PD Dr. sc. nat.
> Federal Food Safety and Veterinary Office FFSVO
> Centre for Proper Housing of Ruminants and Pigs
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models