[R-sig-ME] Presenting results of a mixed model containing factors
Henrik Singmann
henrik.singmann at psychologie.uni-freiburg.de
Wed Sep 18 19:21:19 CEST 2013
Hi Sarah,
One way to set sum-to-zero contrasts globally is via options():
options(contrasts=c('contr.sum', 'contr.poly'))
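If you would rather not change the global option, you can also set the contrasts per factor. A minimal sketch (assuming TMT1 is, or is first converted to, a factor):

dat$TMT1 <- factor(dat$TMT1)                          # ensure TMT1 is a factor
contrasts(dat$TMT1) <- contr.sum(nlevels(dat$TMT1))   # sum-to-zero contrasts for TMT1 only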
Setting the global option is done automatically when loading afex, which also provides the overall effects you are interested in (using the Kenward-Roger approximation):
require(afex)
dat <- read.table("http://pastebin.com/raw.php?i=bHug5kTt", header = TRUE)
mixed(DV ~ TMT1 * TMT2 + (1 | Block/TMT1), data = dat)
##        Effect     stat ndf ddf F.scaling p.value
## 1 (Intercept) 377.3464   1   5         1  0.0000
## 2        TMT1   3.2653   1   5         1  0.1306
## 3        TMT2  13.2433   1  10         1  0.0045
## 4   TMT1:TMT2  27.0271   1  10         1  0.0004
Alternatively, you can get the same test statistics from car::Anova (the p-values differ because Anova reports Wald chi-square tests rather than Kenward-Roger F tests).
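Here m1 is the lmer fit from earlier in the thread; a minimal sketch reconstructing it, assuming the same model formula as in the mixed() call above:

require(lme4)
require(car)
m1 <- lmer(DV ~ TMT1 * TMT2 + (1 | Block/TMT1), data = dat)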
Anova(m1)
## Analysis of Deviance Table (Type II Wald chisquare tests)
##
## Response: DV
##             Chisq Df Pr(>Chisq)
## TMT1       3.2653  1  0.0707600 .
## TMT2      13.2433  1  0.0002736 ***
## TMT1:TMT2 27.0271  1  2.006e-07 ***
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
As your design is fully balanced, these Type II tests are equivalent to Type III tests (the default for afex); note that Type III tests are only meaningful with sum-to-zero contrasts, as set above:
Anova(m1, type = 3)
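Regarding the likelihood ratio tests via anova or drop1 that you mention below: a minimal sketch (the names m_full and m_red are purely illustrative) is to refit with maximum likelihood and compare nested models, e.g. for the interaction:

m_full <- lmer(DV ~ TMT1 * TMT2 + (1 | Block/TMT1), data = dat, REML = FALSE)
m_red  <- update(m_full, . ~ . - TMT1:TMT2)   # drop the interaction
anova(m_red, m_full)                          # likelihood ratio test of the interaction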
Hope that helps.
Cheers,
Henrik
On 18.09.2013 18:20, Sarah Dryhurst wrote:
> Hi Ben,
>
> I'm sorry to dredge up this post again - I thought my reply had sent, but I
> checked and it was sitting in my drafts folder.
>
> My question was on the sum-to-zero contrasts you suggested. Is there a
> recommended way to calculate these in R, or any references I can read about
> them? (I am a complete novice.) I was looking into using anova or drop1 on
> my models to get these "average" effects of the main effects and
> interaction; however, these two only seem to take fixed effects into
> account in their calculations (do they use Wald statistics?), which seems
> pointless to me. What is your view on these two? My design is a balanced
> one... I know the glmm wiki recommends using an MCMC approach:
>
> "Tests of effects (i.e. testing that several parameters are simultaneously
> zero)
>
> From worst to best:
>
> - Wald chi-square tests (e.g. car::Anova)
> - Likelihood ratio test (via anova or drop1)
> - *For balanced, nested LMMs* where df can be computed: conditional
> F-tests
> - *For LMMs*: conditional F-tests with df correction (e.g. Kenward-Roger
> in pbkrtest package)
> - MCMC or parametric, or nonparametric, bootstrap comparisons
> (nonparametric bootstrapping must be implemented carefully to account for
> grouping factors)" (a sketch of the parametric bootstrap follows below)
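For reference, the parametric bootstrap comparison in that last bullet is available via pbkrtest::PBmodcomp; a minimal sketch, reusing the illustrative m_full and m_red fits from above (nsim kept modest here only for speed):

require(pbkrtest)
pb <- PBmodcomp(m_full, m_red, nsim = 1000)   # simulate the null distribution of the LRT
summary(pb)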
>
>
> I have done this via pvals.fnc(m1) in languageR, but it still only gives
> a similar output to my original (with no way of knowing the average
> effects). My concern is that the average effects seem to be what
> supervisors/reviewers want reported, i.e. the effect of Treatment1 rather
> than the effect of Treatment1 level A compared to Treatment1 level B,
> etc....
>
> Any thoughts would be much appreciated! I'm finding it hard to find a
> consensus anywhere. It is difficult to track down examples of reporting
> these things - most focus seems to be on interpretation. Thank you once
> again for your advice.
>
> Sarah
>
>
> On Mon, Sep 9, 2013 at 10:54 PM, Ben Bolker <bbolker at gmail.com> wrote:
>
>> Sarah Dryhurst <s.dryhurst at ...> writes:
>>
>>>
>>> Hi Ben,
>>>
>>> Thank you for your reply! I don't really want to calculate the main
>>> effects as it doesn't make much biological sense (to me!). I just
>>> wasn't sure whether this was "required" in terms of statistical
>>> reporting. That interaction effect is what I am interested in
>>> largely, as it's the combined effect of the different treatments that
>>> is my focus.
>>
>> I think it would still be worth reporting the main effects, as
>> their size puts the size of the interaction in perspective (e.g. I
>> would generally like to be able to judge the size of the interaction
>> *relative to* the main effects, not just its magnitude/t statistic/
>> p value ...).
>>
>>>
>>> With regard to the lack of variance at the Block level, would you
>>> recommend dropping this level here? It doesn't seem to make too much
>>> sense to keep it there...
>>
>> It doesn't matter too much, since the results will be almost identical.
>> It may be worth checking without it, to double-check that the zero-variance
>> result hasn't thrown off the optimization, but I would probably err on the
>> side of reporting it (or say that it was in the original model but
>> estimated as being effectively zero).
>>
--
Dipl. Psych. Henrik Singmann
PhD Student
Albert-Ludwigs-Universität Freiburg, Germany
http://www.psychologie.uni-freiburg.de/Members/singmann