[R-sig-ME] Statistical significance of random-effects (lme4 or others)
Simon Harmel
sim.harmel at gmail.com
Mon Sep 7 06:28:19 CEST 2020
Dear J,
My goal is not to compare any models. Rather, for each model I want to
know whether the variance component differs from 0, and what the p-value
for that test is.
On Sun, Sep 6, 2020 at 11:21 PM Juho Kristian Ruohonen <
juho.kristian.ruohonen using gmail.com> wrote:
> A non-statistician's two cents:
>
> 1. I'm not sure likelihood-ratio tests (LRTs) are valid at all for
> models fit using REML (rather than MLE). The anova() function seems to
> agree: its present version (in R 4.0.2) refits the models using MLE
> before comparing their deviances.
> 2. Even when the models have been fit using MLE, likelihood-ratio
> tests for variance components are only straightforward with a single
> variance component. In your case, this means an LRT can only be used for *m1
> vs ols1* and *m2 vs ols2*. There, you simply halve the p-value
> reported by *anova(m1, ols1)* and *anova(m2, ols2)*. Both are
> obviously extremely statistically significant. However, models *m3* and
> *m4* both have two random effects. The last time I checked, the
> default chi-squared assumption for the deviance is no longer applicable in
> such cases, so the p-values reported by Stata and SPSS are only approximate
> and tend to be too conservative. Perhaps you might use an information
> criterion instead, such as the AIC
> <https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#can-i-use-aic-for-mixed-models-how-do-i-count-the-number-of-degrees-of-freedom-for-a-random-effect>.
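[The halving in point 2 reflects the boundary problem: under the null hypothesis the variance sits on the edge of its parameter space, so the LRT statistic is asymptotically a 50:50 mixture of a point mass at zero and a chi-squared with 1 df. A base-R sketch; the LRT statistic value 9.2 is purely illustrative, not taken from the thread's data:

```r
# Hypothetical deviance difference, 2 * (logLik(mixed) - logLik(ols));
# the value 9.2 is illustrative only
lrt_stat <- 9.2

# Naive p-value, assuming a chi-squared null with 1 df
p_naive <- pchisq(lrt_stat, df = 1, lower.tail = FALSE)

# Boundary-corrected p-value: the null is a 50:50 mixture of
# chi^2_0 (point mass at zero) and chi^2_1, so halve the naive value
p_corrected <- p_naive / 2  # roughly 0.0012 here
```
]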
>
> Best,
>
> J
>
> On Mon, 7 Sep 2020 at 06:48, Simon Harmel (sim.harmel using gmail.com) wrote:
>
>> Dear Victor,
>>
>> I'm looking for the p-values for the "variance components", i.e. the
>> variances (or SDs) estimated for the random effects in the 4 models I
>> showed.
>>
>> For example, for m1 I'm looking for the p-values for the terms marked below.
>>
>> Linear mixed model fit by REML ['lmerMod']
>> Formula: math ~ 1 + (1 | sch.id)
>> Data: hsb
>>
>> REML criterion at convergence: 47116.8
>>
>> Scaled residuals:
>> Min 1Q Median 3Q Max
>> -3.0631 -0.7539 0.0267 0.7606 2.7426
>>
>> Random effects: *************
>> Groups Name Variance Std.Dev.
>> sch.id (Intercept) 8.614 2.935 *****P-VALUE HERE?****
>> Residual 39.148 6.257 **** P_VALUE HERE? ****
>> Number of obs: 7185, groups: sch.id, 160
>>
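[For terms like these, third-party packages can attach a test directly; a sketch assuming the lmerTest and RLRsim packages are installed (neither is mentioned in the thread) and the data URL from the original post is reachable. Note the residual variance is not normally tested against zero, since it cannot be zero in a linear mixed model:

```r
library(lmerTest)  # loads lme4 and adds tests; lmer() now returns lmerModLmerTest
library(RLRsim)    # simulation-based exact restricted LRT

hsb <- read.csv("https://raw.githubusercontent.com/rnorouzian/e/master/hsb.csv")
m1  <- lmer(math ~ 1 + (1 | sch.id), data = hsb)

ranova(m1)     # LRT for dropping (1 | sch.id), with a p-value
exactRLRT(m1)  # exact test that the sch.id variance component is zero
```
]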
>> On Sun, Sep 6, 2020 at 10:15 PM Simon Harmel <sim.harmel using gmail.com>
>> wrote:
>>
>> > Hi Victor,
>> >
>> > Thanks for your response. First, as far as I know, "lsmeans" has now
>> > become "emmeans".
>> >
>> > Second, all my data and code are 100% reproducible. Would you please
>> > let me know how I can obtain the p-values for the random-effects
>> > variance components in any of the 4 models I showed in my original
>> > question?
>> >
>> > Thanks, Simon
>> >
>> > On Sun, Sep 6, 2020 at 9:42 PM Victor Oribamise <
>> > victor.oribamise using gmail.com> wrote:
>> >
>> >> Hey Simon,
>> >>
>> >> You can check the lsmeans package in R; you can obtain p-values for
>> >> your models using it.
>> >>
>> >> Victor
>> >>
>> >> On Sun, Sep 6, 2020 at 9:38 PM Simon Harmel <sim.harmel using gmail.com>
>> wrote:
>> >>
>> >>> Dear All,
>> >>>
>> >>> Most MLM packages (e.g., HLM, SPSS, SAS, Stata) provide a p-value for
>> >>> the variance components.
>> >>>
>> >>> My understanding based on
>> >>> https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#testing-significance-of-random-effects
>> >>> is that this is not possible to achieve in R, right?
>> >>>
>> >>> If not, for my 4 models below, I assume I need to compare, using
>> >>> anova(), each model against its OLS equivalent to obtain a likelihood
>> >>> ratio test p-value for each model's variance component, correct?
>> >>>
>> >>> hsb <- read.csv('https://raw.githubusercontent.com/rnorouzian/e/master/hsb.csv')
>> >>>
>> >>> library(lme4)
>> >>> m1 <- lmer(math ~ 1 + (1 | sch.id), data = hsb)
>> >>> m2 <- lmer(math ~ meanses + (1 | sch.id), data = hsb)
>> >>> m3 <- lmer(math ~ ses + (ses | sch.id), data = hsb)
>> >>> m4 <- lmer(math ~ ses * meanses + (ses | sch.id), data = hsb)
>> >>>
>> >>> ols1 <- lm(math ~ 1, data = hsb)
>> >>> ols2 <- lm(math ~ meanses, data = hsb)
>> >>> ols3 <- lm(math ~ ses, data = hsb)
>> >>> ols4 <- lm(math ~ ses * meanses, data = hsb)
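[The comparison described above can be sketched as follows for m1, assuming lme4 is installed and the data URL is reachable; anova() refits the lmer model with ML before comparing deviances, and the naive p-value is then halved because the single variance component is tested at its boundary:

```r
library(lme4)

hsb <- read.csv("https://raw.githubusercontent.com/rnorouzian/e/master/hsb.csv")

m1   <- lmer(math ~ 1 + (1 | sch.id), data = hsb)
ols1 <- lm(math ~ 1, data = hsb)

# anova() refits m1 with ML so the two deviances are comparable
lrt <- anova(m1, ols1)

# Naive chi-squared p-value (second row is the larger model),
# then the boundary correction: halve it
p_naive     <- lrt[["Pr(>Chisq)"]][2]
p_corrected <- p_naive / 2
```
]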
>> >>>
>> >>>
>> >>>
>> >>> _______________________________________________
>> >>> R-sig-mixed-models using r-project.org mailing list
>> >>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
>>
>>
>