[R-sig-ME] nAGQ

John Poe jdpoe223 at gmail.com
Sun Jul 7 22:21:54 CEST 2024


Yes, it's using glmer and not lmer. It compares Laplace and AGQ with 7, 11,
51, and 101 quadrature points against the true distribution. Laplace and
the lower numbers of quadrature points should perform poorly because they
rely on normality of the random effects; higher values of nAGQ should be
more accurate.
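
The fitting loop is essentially the following (a minimal sketch with
placeholder names, assuming a data frame dat with a binary outcome y and a
grouping factor g):

library(lme4)

## nAGQ = 1 is the Laplace approximation; larger values use adaptive
## Gauss-Hermite quadrature with that many points (scalar random
## effects only)
fits <- lapply(c(1, 7, 11, 51, 101), function(q)
  glmer(y ~ 1 + (1 | g), data = dat, family = binomial, nAGQ = q))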

On Sun, Jul 7, 2024, 2:58 PM Ben Bolker <bbolker at gmail.com> wrote:

> In lme4 the AGQ stuff is only for GLMMs, i.e. for glmer, not lmer. I'm
> not sure of the theory in your case ...
>
> On Sun, Jul 7, 2024, 3:50 PM John Poe <jdpoe223 at gmail.com> wrote:
>
>> Sure,
>>
>> I simulated several different random-effects distributions, based mostly
>> on mixtures of normals. The main idea was to break anything that assumes
>> normality of the random effects when approximating them.
>>
>> One of the worst cases I could come up with was a random-effects
>> distribution with two modes surrounding the mean: one mode from a normal
>> distribution and one from a Weibull with a long tail. So both asymmetric
>> and multimodal.
>>
>> All of the simulations had 5000 groups with 500 observations per group
>> and a binary outcome. I wanted to avoid shrinkage problems or distortions
>> from too few groups.
>>
>> I used lme4 to fit the models and extract random effects estimates.
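>>
>> The extraction step is just the conditional modes that lme4 reports as
>> random effects, e.g. (assuming a fitted glmer model called fit):
>>
>> eb <- ranef(fit)$g[["(Intercept)"]]  ## empirical Bayes means per group
>> hist(eb, breaks = 100)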
>>
>>
>> On Sun, Jul 7, 2024, 2:29 PM Ben Bolker <bbolker at gmail.com> wrote:
>>
>>> Can you give a few more details of your simulations? E.g. response
>>> distribution, mean of the response, cluster size?
>>>
>>> On Sat, Jul 6, 2024, 9:52 PM John Poe <jdpoe223 at gmail.com> wrote:
>>>
>>>> Hello all,
>>>>
>>>> I'm getting ready to teach multilevel modeling and am putting together
>>>> some simulations to show the relative accuracy of PIRLS, Laplace, and
>>>> various numbers of quadrature points in lme4 when the true random
>>>> effects distribution isn't normal. Every bit of intuition I have says
>>>> that nAGQ = 100 should do better than nAGQ = 11, which should do better
>>>> than Laplace, and every stats article I've read on the subject agrees
>>>> with that intuition. There has been some debate over whether the extra
>>>> accuracy actually matters in practice, but no debate over whether the
>>>> higher-order approximations are in fact more accurate. But that's not
>>>> what's showing up.
>>>>
>>>> When I fit the models and predict the empirical Bayes means, the
>>>> histograms look as close to identical as possible. When I use KL
>>>> divergence and Gâteaux derivatives to test for differences between the
>>>> distributions, both show very low scores, meaning the distributions are
>>>> very, very similar.
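>>>>
>>>> (By the KL check I mean something like the following sketch, with
>>>> placeholder names: kernel density estimates of the EB means from two
>>>> fits, compared on a common grid.)
>>>>
>>>> d1 <- density(eb_laplace, from = -4, to = 4, n = 512)$y + 1e-10
>>>> d2 <- density(eb_agq101, from = -4, to = 4, n = 512)$y + 1e-10
>>>> d1 <- d1 / sum(d1)  ## normalize to discrete probabilities
>>>> d2 <- d2 / sum(d2)
>>>> kl <- sum(d1 * log(d1 / d2))  ## KL(d1 || d2); near zero = near-identical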
>>>>
>>>> Furthermore, when I tried a multimodal distribution, they all did a bad
>>>> job of approximating the true random effects: the exact same bad job.
>>>>
>>>> I feel like I'm taking crazy pills. The only explanations I can think
>>>> of are that lme4 is overriding my choice of approximation method for
>>>> the random effects, or that the EB means are being calculated the same
>>>> way regardless of the model.
>>>>
>>>> Any ideas?
>>>>
>>>
