[R-sig-ME] anova (lm, lmer ) question
romunov
romunov at gmail.com
Thu Oct 2 13:38:58 CEST 2014
FWIW, this is from the glmm faq site <http://glmm.wikidot.com/faq>.
How can I test whether a random effect is significant?
- perhaps you shouldn't: if the random effect is part of the
experimental design, this procedure may be considered 'sacrificial
pseudoreplication' (Hurlbert 1984); using stepwise approaches to eliminate
non-significant terms in order to squeeze more significance out of the
remaining terms is dangerous in any case
- *do not* compare lmer models with the corresponding lm fits, or
glmer/glm; the log-likelihoods are not commensurate (i.e., they include
different additive terms)
- consider using RLRsim for simple tests
- parametric bootstrap
- profile likelihood (using more recent versions of lme4?) to evaluate
the likelihood at σ² = 0
- keep in mind that LRT-based null hypothesis tests are conservative
when the null value (such as σ² = 0) is on the boundary of the feasible
space; in the simplest case (single random effect variance), the p-value is
approximately twice as large as it should be
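For the single-random-intercept case, the RLRsim package's exactRLRT() gives the simulation-based test the FAQ points to. A minimal sketch on simulated data (the data frame, variable names, and simulation setup here are illustrative, not from the thread):

```r
library(lme4)
library(RLRsim)

## Illustrative data: 10 countries, 20 observations each,
## with a modest country-level effect.
set.seed(1)
d <- data.frame(country = factor(rep(1:10, each = 20)))
d$y <- rnorm(200, mean = rep(rnorm(10, sd = 0.5), each = 20))

## Fit with REML (the lmer default), as exactRLRT expects.
m <- lmer(y ~ (1 | country), data = d)

## Simulated finite-sample null distribution of the restricted LRT,
## which sidesteps the boundary problem entirely.
exactRLRT(m)
```

This tests H0: the random-intercept variance is zero, without ever comparing an lmer log-likelihood to an lm one.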
Cheers,
Roman
On Thu, Oct 2, 2014 at 1:33 PM, Ben Pelzer <b.pelzer at maw.ru.nl> wrote:
> Dear list,
>
> Is it possible to use the anova() function to compare the deviances of
> model1 (fixed intercept) and model2 (random intercept):
>
> model1 <- lm(y ~ 1)
> model2 <- lmer(y ~ (1|country), REML=FALSE)
>
> In the above situation, one can use -2*logLik(model1) and
> -2*logLik(model2) to find both deviances, the difference of which can
> then be tested. However, it would be nice if anova(model1, model2) could
> be used to this end. Is this possible somehow?
>
> Ben.
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
--
In God we trust, all others bring data.