[R-sig-ME] Theoric rapid doubts about glmer()

D. Rizopoulos
Wed Apr 24 17:51:24 CEST 2019


From: David Bars <dbarscortina using gmail.com>
Date: Wednesday, 24 Apr 2019, 17:46
To: r-sig-mixed-models using r-project.org
Subject: [R-sig-ME] Theoric rapid doubts about glmer()

Dear community,

My previous e-mail, which included links, seems to have slipped through the
cracks. For this reason, this second time I am sending only two theoretical
doubts. I hope someone can help me understand them; as a PhD student with a
curiosity in statistics, I have not been able to solve them by myself yet.

1 - As a general rule, in glmer models with only one random effect, is it
always more advisable to use adaptive Gauss-Hermite quadrature instead of
the Laplace approximation, because we can use more than one quadrature
point?

In generalized linear mixed models, fitted by glmer(), the likelihood function involves an integral over the random effects that cannot be solved analytically. Hence, it must be approximated numerically. A standard method for doing this is adaptive Gaussian quadrature: the more quadrature points you use, the more accurate the approximation. The Laplace approximation is equivalent to adaptive Gaussian quadrature with one point, and it often does not work optimally, especially for binary data.

Currently glmer() allows adaptive Gaussian quadrature only for scalar random effects (i.e., a single random intercept per grouping factor). If you want to include more than random intercepts and still use adaptive Gaussian quadrature, you can do so with the GLMMadaptive package:
https://drizopoulos.github.io/GLMMadaptive/
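A minimal sketch of the two approaches described above. The data frame `dat`, the binary outcome `y`, and the covariate `x` are hypothetical placeholders, not from the original post; only the grouping factor `Horse` comes from the thread.

```r
library(lme4)

## Laplace approximation: the glmer() default, equivalent to nAGQ = 1
m_laplace <- glmer(y ~ x + (1 | Horse), data = dat, family = binomial)

## Adaptive Gauss-Hermite quadrature with 15 points; in glmer() this is
## only available for models with a single scalar random effect
m_agq <- glmer(y ~ x + (1 | Horse), data = dat, family = binomial,
               nAGQ = 15)

## For random slopes combined with adaptive quadrature, GLMMadaptive
## can be used instead
library(GLMMadaptive)
m_adapt <- mixed_model(fixed = y ~ x, random = ~ x | Horse, data = dat,
                       family = binomial(), nAGQ = 15)
```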



2 - I've read some posts addressing why the variance of a random effect
differs between lmer and glmer (e.g.,
https://stats.stackexchange.com/questions/115090/why-do-i-get-zero-variance-of-a-random-effect-in-my-mixed-model-despite-some-va).

Due to the non-normality of my data (not attached), I need to use glmer,
but how can I explain that the variance of my random effect (Horse) is
practically 0? Performing an analogous analysis with lmer (wrongly assuming
normality), the variance attributed to Horse increases to 33%! I think that
Horse must be important in explaining the variance of my model (as the lmer
model suggests).

Therefore, when I fit glmer I obtain a variance of 0 for Horse as a random
effect, whereas with lmer() I obtain a variance of 33%. How can I assess
the importance of the random effect in my model? How can I interpret the
model correctly?
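One common way to gauge the importance of a random effect, not spelled out in the thread, is to compare the model with and without it via a likelihood-ratio test and to inspect the estimated variance components directly. This is a hedged sketch: `dat`, `y`, and `x` are hypothetical placeholders.

```r
library(lme4)

## Model with and without the Horse random intercept
m_full <- glmer(y ~ x + (1 | Horse), data = dat, family = binomial)
m_norand <- glm(y ~ x, data = dat, family = binomial)

## Likelihood-ratio test; note the p-value is conservative, because the
## null hypothesis (variance = 0) lies on the boundary of the parameter
## space
anova(m_full, m_norand)

## Inspect the estimated variance components directly
VarCorr(m_full)
```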

Thanks in advance for your help,

David Bars
PhD Student
University of Lleida // INRA Jouy-en-Josas


_______________________________________________
R-sig-mixed-models using r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



