[R-sig-ME] nAGQ
Ben Bolker
bbolker at gmail.com
Sun Jul 7 23:25:59 CEST 2024
John, try your examples in GLMMadaptive, which has an independent
implementation of AGQ.
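A minimal cross-check along those lines might look like the following; the formula, data, and variable names (`dat`, `y`, `x`, `id`) are placeholders, not taken from John's simulation:

```r
## Sketch: compare lme4's Laplace/AGQ fits against GLMMadaptive's
## independent AGQ implementation. Model and data names are placeholders.
library(lme4)
library(GLMMadaptive)

## lme4: Laplace (nAGQ = 1) vs. adaptive Gauss-Hermite quadrature
fit_lap <- glmer(y ~ x + (1 | id), data = dat, family = binomial, nAGQ = 1)
fit_agq <- glmer(y ~ x + (1 | id), data = dat, family = binomial, nAGQ = 11)

## GLMMadaptive: the same model, independently implemented AGQ
fit_ga <- mixed_model(fixed = y ~ x, random = ~ 1 | id, data = dat,
                      family = binomial(), nAGQ = 11)

## If the two implementations agree, these columns should be close
cbind(laplace = fixef(fit_lap), agq11 = fixef(fit_agq),
      GLMMadaptive = fixef(fit_ga))
```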
On Sun, Jul 7, 2024, 4:27 PM Dimitris Rizopoulos <d.rizopoulos using erasmusmc.nl>
wrote:
> As the number of measurements per group increases, the conditional distribution
> of the random effects given the observed data (i.e., the posterior of the
> random effects) converges to a normal distribution, even if the marginal
> distribution of the random effects (prior) is not normal. See some
> arguments regarding this here for the related class of shared parameter
> models: https://doi.org/10.1093/biomet/asm087
>
>
>
> ——
> Dimitris Rizopoulos
> Professor of Biostatistics
> Erasmus University Medical Center
> The Netherlands
> ------------------------------
> *From:* R-sig-mixed-models <r-sig-mixed-models-bounces using r-project.org> on
> behalf of John Poe <jdpoe223 using gmail.com>
> *Sent:* Sunday, July 7, 2024 10:21:54 PM
> *To:* Ben Bolker <bbolker using gmail.com>
> *Cc:* R SIG Mixed Models <r-sig-mixed-models using r-project.org>
> *Subject:* Re: [R-sig-ME] nAGQ
>
>
>
>
>
>
> Yes, it's using glmer and not lmer. I'm comparing Laplace, and AGQ with 7,
> 11, 51, and 101 quadrature points, against the true distribution. Laplace
> and the lower numbers of quadrature points should perform poorly because
> they bank on normality; higher numbers should be more accurate.
>
> On Sun, Jul 7, 2024, 2:58 PM Ben Bolker <bbolker using gmail.com> wrote:
>
> > In lme4 the AGQ stuff is only for GLMMs, i.e. for glmer, not lmer. I'm
> > not sure of the theory in your case ...
> >
> > On Sun, Jul 7, 2024, 3:50 PM John Poe <jdpoe223 using gmail.com> wrote:
> >
> >> Sure,
> >>
> >> I simulated several different random-effects distributions, based
> >> mostly on mixtures of normals. The main idea was to break anything
> >> that assumes normality of the random effects when approximating them.
> >>
> >> One of the worst cases I could come up with was a random-effect
> >> distribution with two modes surrounding the mean: one mode from a
> >> normal distribution and one from a Weibull with a long tail. So both
> >> asymmetric and multimodal.
> >>
> >> All of the simulations had 5000 groups with 500 observations per group
> >> and a binary outcome. I wanted to avoid shrinkage problems or
> >> distortions from too few groups.
> >>
> >> I used lme4 to fit the models and extract random effects estimates.
> >>
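> >> A rough sketch of a data-generating setup like the one described above
> >> (all parameter values here are my own assumptions; only the structure,
> >> a bimodal normal/Weibull mixture with 5000 groups of 500 and a binary
> >> outcome, comes from the description):
> >>
> >> ```r
> >> ## Hypothetical simulation sketch; the means, sds, and Weibull
> >> ## parameters are illustrative choices, not the original ones
> >> set.seed(1)
> >> n_grp <- 5000; n_obs <- 500
> >> ## 50/50 mixture: a normal mode below the mean and a long-tailed
> >> ## Weibull mode above it, then centered so the modes straddle zero
> >> pick <- rbinom(n_grp, 1, 0.5)
> >> b <- ifelse(pick == 1,
> >>             rnorm(n_grp, mean = -1.5, sd = 0.5),
> >>             rweibull(n_grp, shape = 1.2, scale = 1.5))
> >> b <- b - mean(b)
> >> id <- rep(seq_len(n_grp), each = n_obs)
> >> x <- rnorm(n_grp * n_obs)
> >> y <- rbinom(n_grp * n_obs, 1, plogis(0.5 * x + b[id]))
> >> dat <- data.frame(y = y, x = x, id = factor(id))
> >> ```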
> >>
> >> On Sun, Jul 7, 2024, 2:29 PM Ben Bolker <bbolker using gmail.com> wrote:
> >>
> >>> Can you give a few more details of your simulations? E.g. response
> >>> distribution, mean of the response, cluster size?
> >>>
> >>> On Sat, Jul 6, 2024, 9:52 PM John Poe <jdpoe223 using gmail.com> wrote:
> >>>
> >>>> Hello all,
> >>>>
> >>>> I'm getting ready to teach multilevel modeling and am putting together
> >>>> some
> >>>> simulations to show relative accuracy of PIRLS, Laplace, and various
> >>>> numbers of quadrature points in lme4 when true random effects
> >>>> distributions
> >>>> aren't normal. Every bit of intuition I have says that nAGQ=100 should
> >>>> do
> >>>> better than nAGQ=11 which should be better than Laplace. Every stats
> >>>> article I've ever read on the subject also agrees with that intuition.
> >>>> There was some debate over whether it actually matters in practice
> >>>> that some solutions are more accurate, but no debate that they
> >>>> actually are more accurate. Yet that's not what's showing up.
> >>>>
> >>>> When I fit the models and predict empirical Bayes means, the
> >>>> histograms look as close to identical as possible. When I use KL
> >>>> divergence and Gâteaux derivatives to test for differences between
> >>>> the distributions, both show very low scores, meaning the
> >>>> distributions are very, very similar.
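> >>>> One way to sketch that comparison (assuming `fit` is a fitted glmer
> >>>> model and `b` the vector of true simulated random effects; the
> >>>> binning scheme for the discretized KL divergence is my own choice):
> >>>>
> >>>> ```r
> >>>> ## Compare the EB conditional modes to the true random effects via
> >>>> ## a discretized KL divergence over a common set of bins
> >>>> eb <- ranef(fit)$id[, 1]
> >>>> brk <- seq(min(c(b, eb)), max(c(b, eb)), length.out = 51)
> >>>> p <- hist(b,  breaks = brk, plot = FALSE)$counts
> >>>> q <- hist(eb, breaks = brk, plot = FALSE)$counts
> >>>> p <- p / sum(p); q <- q / sum(q)
> >>>> keep <- p > 0 & q > 0          # drop bins empty in either sample
> >>>> kl <- sum(p[keep] * log(p[keep] / q[keep]))
> >>>> ```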
> >>>>
> >>>> Furthermore, when I tried a multimodal distribution, they all did a
> >>>> bad job of approximating the true random effect. The exact same bad
> >>>> job.
> >>>>
> >>>> I feel like I'm taking crazy pills. The only explanations that make
> >>>> any sense to me are that lme4 is overriding my choices for
> >>>> approximating the random effects, or that the EB means are being
> >>>> computed the same way regardless of the model.
> >>>>
> >>>> Any ideas?
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> R-sig-mixed-models using r-project.org mailing list
> >>>>
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> >>>>
> >>>
>
>