[R-sig-ME] Replicating type III anova tests for glmer/GLMM
Henrik Singmann
singmann at psychologie.uzh.ch
Tue Feb 23 10:28:58 CET 2016
Hi Francesco,
As far as I see it, there are basically two ways to get these tests easily.
1) afex::mixed()
You simply pass your model as you would to glmer() while also specifying
that you want likelihood-ratio tests as the method for testing the effects,
e.g.,
library(afex)  # provides mixed()

m1 <- mixed(x ~ y + (y | z), data = d, method = "LRT", family = binomial)
m1          # prints tests of effects
nice(m1)    # same, see also anova(m1)
summary(m1) # shows summary of the full model
Note that mixed() already takes care of the correct coding (i.e., it uses
sum-to-zero contrasts by default) and uses type 3 tests by default. You can
also obtain parametric-bootstrap p-values instead of the approximate
likelihood-ratio tests by specifying method = "PB" in the call to mixed()
(but this may take some time; see the mixed() examples for how to use
multiple cores, and the sketch below).
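A minimal sketch of that parametric-bootstrap route (not from the original
post): the cl argument of mixed() takes a cluster from the parallel package,
and the number of cores below is an arbitrary example.

# Sketch only: parametric-bootstrap p-values, with the simulations
# distributed across cores via a cluster passed to mixed()'s cl argument.
library(afex)
library(parallel)

cl <- makeCluster(4)  # arbitrary example; adjust to your machine
m1_pb <- mixed(x ~ y + (y | z), data = d, method = "PB",
               family = binomial, cl = cl)
stopCluster(cl)
m1_pb  # bootstrapped p-values for the effects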
2) car::Anova()
You first fit the model yourself, making sure to specify the correct coding,
and then pass it to car::Anova() to obtain Wald tests of the effects, e.g.:
library(lme4)              # provides glmer()

afex::set_sum_contrasts()  # sets sum-to-zero contrasts globally
m2 <- glmer(x ~ y + (y | z), data = d, family = binomial)
car::Anova(m2, type = 3)   # type III Wald chi-square tests
The difference between the two options is that mixed() fits a model for
each effect to be tested, while car::Anova() only requires one model to be
fitted. The consequence is that the former may take considerably longer
(again, you can distribute the fitting of the individual models across
cores, see the mixed() examples). On the other hand, likelihood-ratio tests
are in principle less problematic than Wald tests
(http://glmm.wikidot.com/faq#toc4). The small sketch below lines up the two
sets of tests.
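Purely as an illustration (not from the original message), given the two
fits m1 and m2 from above:

# Sketch: the two kinds of type III tests side by side.
anova(m1)                 # likelihood-ratio chi-square tests from mixed()
car::Anova(m2, type = 3)  # Wald chi-square tests from the single glmer() fit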
Finally, mixed() allows you to easily suppress the estimation of the
correlations among random slopes for factors by using the expand_re
argument, which may help with model convergence:
m3 <- mixed(x ~ y + (y || z), data = d, method = "LRT", family = binomial,
            expand_re = TRUE)
What I would report would probably be the chi-square values and associated
p-values for the tests of the effects (see the small sketch below for where
to find them). Reporting coefficients and associated standard errors usually
only makes sense if there are only two groups, and for interactions they can
also be difficult to interpret. So I would go with the chi-square values in
the first place.
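A hedged pointer (not from the original post): the exact layout of the table
can differ across afex versions, so inspect it before copying numbers.

# Sketch only: the table printed by nice()/anova() contains the chi-square
# statistics, degrees of freedom, and p-values for each effect.
nice(m1)
anova(m1)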
Hope that helps,
Henrik
On 23.02.2016 at 02:12, Francesco Romano wrote:
> Dear all,
>
> I'm trying to report my analysis replicating the method in the following
> papers:
>
> Cai, Pickering, and Branigan (2012). Mapping concepts to syntax: Evidence
> from structural priming in Mandarin Chinese. Journal of Memory and Language 66
> (2012) 833–849. (looking at pg. 842, "Combined analysis of Experiments 1
> and 2" section)
>
> Filiaci, Sorace, and Carreiras (2013). Anaphoric biases of null and overt
> subjects in Italian and Spanish: a cross-linguistic comparison. Language,
> Cognition, and Neuroscience DOI:10.1080/01690965.2013.801502 (looking at
> pg.11, first two paragraphs)
>
> This is because I have a glmer model with three fixed effects, two random
> intercepts modeling a binary outcome, exactly as in the articles mentioned.
>
> The difficulty I'm finding is with locating information on commands
> generating coefficients, SE, z, and p values (e.g. maximum likelihood
> (Laplace Approximation)) to report main effects and interactions with the
> anova or afex::mixed commands, following application of effect coding. I
> have looked in several places, including Ben Bolker's FAQ
> http://glmm.wikidot.com/faq and past posts on the topic in this r-sig.
> Although there appears to be a plethora of material for lmer, I can't seem
> to locate anything in the right direction for glmer.
>
> Many thanks for any help.