[R-sig-ME] Statistical significance of random-effects (lme4 or others)

Daniel Lüdecke d.luedecke at uke.de
Mon Sep 7 08:13:16 CEST 2020


Hi Simon,
I'm not sure this is a useful question. The variance can / should never be
negative, and it is usually above 0 whenever there is some variation in
your outcome across the grouping factors (random effects).

Packages I know of that do some "significance testing" or uncertainty
estimation for random effects are lmerTest::ranova() (its documentation
explains well what it does), arm::se.ranef(), and
parameters::standard_error(effects = "random"). The latter two compute
standard errors for the conditional modes of the random effects (what you
get with "ranef()").
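
For illustration, a minimal sketch using the sleepstudy data that ships
with lme4 (the model is just an example, not your fit):

library(lme4)
library(lmerTest)   # provides ranova(); masks lme4::lmer()

## illustrative model on the sleepstudy data
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## LRT-style tests of the random-effect terms (see ?ranova for details)
ranova(m)

## standard errors for the conditional modes, i.e. what ranef(m) returns
arm::se.ranef(m)
parameters::standard_error(m, effects = "random")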

Best
Daniel

-----Original Message-----
From: R-sig-mixed-models <r-sig-mixed-models-bounces using r-project.org> On
Behalf Of Simon Harmel
Sent: Monday, 7 September 2020 06:28
To: Juho Kristian Ruohonen <juho.kristian.ruohonen using gmail.com>
Cc: r-sig-mixed-models <r-sig-mixed-models using r-project.org>
Subject: Re: [R-sig-ME] Statistical significance of random-effects (lme4 or
others)

Dear J,

My goal is not to do any comparison between models. Rather, for each model
I want to know whether the variance component is different from 0, and
what the p-value for that test is.

On Sun, Sep 6, 2020 at 11:21 PM Juho Kristian Ruohonen <
juho.kristian.ruohonen using gmail.com> wrote:

> A non-statistician's two cents:
>
>    1. I'm not sure likelihood-ratio tests (LRTs) are valid at all for
>    models fit using REML (rather than MLE). The anova() function seems
>    to agree, given that its present version (4.0.2) refits the models
>    using MLE in order to compare their deviances.
>    2. Even when the models have been fit using MLE, likelihood-ratio
>    tests for variance components are only applicable when a single
>    variance component is tested. In your case, this means an LRT can
>    only be used for *m1 vs ols1* and *m2 vs ols2*. There, you simply
>    divide the p-value reported by *anova(m1, ols1)* and *anova(m2,
>    ols2)* by two (see the sketch after this quoted message). Both are
>    obviously extremely statistically significant. However, models *m3*
>    and *m4* both have two random effects. The last time I checked, the
>    default assumption that the deviance difference follows a
>    chi-squared distribution no longer holds in such cases, so the
>    p-values reported by Stata and SPSS are only approximate and tend to
>    be too conservative. Perhaps you might apply an information
>    criterion instead, such as the AIC
>    <https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#can-i-use-aic-for-mixed-models-how-do-i-count-the-number-of-degrees-of-freedom-for-a-random-effect>
>    .
>
> Best,
>
> J
>
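
To make point 2 above concrete: a minimal sketch of the halved-p-value LRT,
using the sleepstudy data from lme4 as a stand-in, since the actual m1/ols1
fits are not shown in this thread.

library(lme4)

ols1 <- lm(Reaction ~ Days, data = sleepstudy)                    # no random effect
m1   <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)  # one variance component

## anova() refits the lmer model with ML (refit = TRUE is the default)
## before comparing its deviance with that of the lm() fit
lrt <- anova(m1, ols1)
lrt

## the null value (variance = 0) lies on the boundary of the parameter
## space, so the naive chi-squared p-value is conservative; halving it is
## the usual correction when a single variance component is tested
lrt[["Pr(>Chisq)"]][2] / 2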




