[R-sig-ME] Testing Significance of Random Effects
|uc@@|oppo|| @end|ng |rom gm@||@com
Sat Mar 13 23:56:34 CET 2021
As your interest seems to lie in assessing whether the fit of the model improves after adding a given random effect, I would suggest looking at the AIC or the deviance, both (relative) measures of how well the model fits the underlying data: the smaller the AIC/deviance, the better the fit, so you can evaluate the impact of an additional random effect by studying how these indices change.
I believe you can easily obtain these quantities with summary(), AIC(), or broom.mixed::glance() (amongst, I imagine, many other alternatives).
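For example (a minimal sketch, assuming lme4 is installed and using its built-in sleepstudy data), you could fit the model with and without a random slope and compare:

```r
library(lme4)

# Model with a random slope for Days, and a simpler model without it
m_full  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
m_small <- lmer(Reaction ~ Days + (1 | Subject),    sleepstudy, REML = FALSE)

# Smaller AIC = better trade-off between fit and complexity
AIC(m_small, m_full)

# Likelihood-ratio test; note the usual caveat that testing a variance
# on the boundary of its parameter space makes the p-value conservative
anova(m_small, m_full)
```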
You could also check the lmerTest package, although I have not personally played with it.
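For instance (a minimal sketch, assuming lmerTest is installed and again using lme4's sleepstudy data), its ranova() function runs a likelihood-ratio test for dropping each random-effect term in turn:

```r
library(lmerTest)  # masks lme4::lmer with a version that supports ranova()

m <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# One row per droppable random-effect term, plus the unreduced model
ranova(m)
```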
And here is another interesting source for similar questions.
On a more quasi-philosophical note, I would argue for NOT removing covariates merely because of a “wrong” p-value: not only is this a tricky subject in the setting of multi-level models (each level has a different sample size, which makes computing p-values difficult), but it is also far too easy to “turn off the brain” and blindly follow an arbitrarily set threshold.
I do realize that this is going off-topic and we’re entering the realm of subjectivity though, so please consider this as a purely personal note.
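That said, if a formal test is still wanted, a parametric bootstrap sidesteps the boundary problem that makes the naive chi-square p-value conservative. A sketch, assuming the pbkrtest package is installed (PBmodcomp simulates data from the smaller model to build a reference distribution for the LRT statistic):

```r
library(lme4)
library(pbkrtest)

m_full  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
m_small <- lmer(Reaction ~ Days + (1 | Subject),    sleepstudy)

# nsim kept small here for speed; use more simulations in practice
PBmodcomp(m_full, m_small, nsim = 500)
```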
> On 12 Mar 2021, at 16:06, Ben Bolker <bbolker using gmail.com> wrote:
> I wouldn't generally recommend removing random effects on the basis of null-hypothesis significance testing ... but others on this list might, e.g. Matuschek et al. https://arxiv.org/pdf/1511.01864.pdf (section 2.4) suggest backward stepwise removal with an alpha-level of 0.2.
> Also see https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#testing-significance-of-random-effects
>> On 3/12/21 9:43 AM, Marco Zanin wrote:
>> To whom it may concern,
>> My name is Marco Zanin, I am a PhD candidate in Sport & Exercise Physiology at Leeds Beckett University (Leeds, UK) and I work as Sport Scientist at Bath Rugby (Bath, UK).
>> I am relatively new to lme4, and I was wondering whether you might help me find a way to test the significance of the random effects in a model: that is, whether the random effects improve or worsen the fit of the model and are thus necessary, or whether they could be removed.
>> I look forward to hearing from you.
>> Kind regards,
>> Marco Zanin
>> R-sig-mixed-models using r-project.org mailing list