[R-sig-ME] Interpreting lmer() interactions with Helmert contrasts
Becky Gilbert
beckyannegilbert at gmail.com
Fri Aug 21 15:23:46 CEST 2015
Hi Paul/List,
After thinking about this a bit more, I don't think planned comparisons
give me what I'm looking for. I want to know whether the effect of Time
is different for WordType = 0 (level 1) vs the other two levels combined -
this is WordType contrast 1. I also want to know whether the effect of
Time is different for WordType = 1 vs WordType = 2 (i.e. level 2 vs level
3) - this is WordType contrast 2.
I think the use of planned comparisons here would defeat the purpose of my
contrasts, but maybe I'm missing something?
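
To make this concrete, here is a minimal sketch of the kind of
contrast-specific likelihood ratio test I have in mind (assuming the rtData
and model from my earlier message below; WT1, WT2 and T1 are just
illustrative column names for the contrast scores):

library(lme4)

# Turn each Helmert contrast (and the Time contrast) into an explicit numeric
# predictor, so that each interaction term can be dropped on its own.
cm <- contrasts(rtData$WordType)
rtData$WT1 <- cm[rtData$WordType, 1]  # level 1 vs levels 2 & 3 combined
rtData$WT2 <- cm[rtData$WordType, 2]  # level 2 vs level 3
rtData$T1  <- contrasts(rtData$Time)[rtData$Time, 1]

# Refit with ML (REML = FALSE) so the likelihood ratio tests are valid.
full <- lmer(logRT ~ WT1 + WT2 + T1 + WT1:T1 + WT2:T1 +
               (1 + Time | Subject) + (1 | Item),
             data = rtData, REML = FALSE)

anova(full, update(full, . ~ . - WT1:T1))  # Time x contrast 1
anova(full, update(full, . ~ . - WT2:T1))  # Time x contrast 2
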
Thanks!
Becky
____________________________________________
Dr Becky Gilbert
On 21 August 2015 at 13:46, Becky Gilbert <beckyannegilbert at gmail.com>
wrote:
> Hi Paul,
>
> Thanks very much for the suggestion! I tried using lsmeans() to get the
> pairwise comparisons as you suggested, and the results are below.
>
> I'm a little confused by the results because the pairwise comparison tests
> all show p > .05, but the WordType x Time interaction was significant when
> tested via model comparisons...? I think this might be due to the Tukey
> adjustment for multiple comparisons, but I'm not sure. Specifically the
> contrast for the two levels of Time at WordType = 2 looks like it might
> have been significant before the multiple comparisons correction, thus
> accounting for the significance of the interaction term in model
> comparisons. Any thoughts?
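>
> (One way I might check whether the adjustment is what's hiding it, I think,
> is to re-summarise the contrasts without any multiplicity correction - a
> sketch, assuming the lsm.list object returned by the lsmeans() call below:
>
> Model.lmer.means <- lsmeans(Model, spec = pairwise ~ Time | WordType)
> summary(Model.lmer.means$contrasts, adjust = "none")
>
> Although, as I understand it, a significant 2-df interaction in the model
> comparisons doesn't have to show up as any single simple effect of Time
> being significant on its own.)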
>
> Thanks again!
> Becky
>
> $lsmeans
> WordType = 0:
>  Time   lsmean         SE    df lower.CL upper.CL
>  -1   2.880592 0.02209390 21.58 2.834721 2.926464
>  1    2.887315 0.02144245 22.13 2.842860 2.931769
>
> WordType = 1:
>  Time   lsmean         SE    df lower.CL upper.CL
>  -1   2.856211 0.02156603 19.78 2.811193 2.901229
>  1    2.888640 0.02089339 20.17 2.845080 2.932200
>
> WordType = 2:
>  Time   lsmean         SE    df lower.CL upper.CL
>  -1   2.852485 0.02181905 20.72 2.807072 2.897898
>  1    2.893827 0.02113775 21.12 2.849883 2.937770
>
> Confidence level used: 0.95
>
> $contrasts
> WordType = 0:
>  contrast    estimate         SE    df t.ratio p.value
>  -1 - 1   -0.00672255 0.02078469 19.31  -0.323  0.7498
>
> WordType = 1:
>  contrast    estimate         SE    df t.ratio p.value
>  -1 - 1   -0.03242907 0.02097452 20.02  -1.546  0.1377
>
> WordType = 2:
>  contrast    estimate         SE    df t.ratio p.value
>  -1 - 1   -0.04134141 0.02146707 21.93  -1.926  0.0672
>
>
> ____________________________________________
>
> Dr Becky Gilbert
>
> On 21 August 2015 at 12:19, paul debes <paul.debes at utu.fi> wrote:
>
>> Hi Becky,
>>
>> Maybe you are interested in pairwise comparisons? The "lsmeans" package
>> comes in handy.
>>
>> Try something like this:
>>
>> library("pbkrtest") # gives you KW-adjusted denDF for tests, but must be
>> installed
>> library("lsmeans")
>>
>> Model.lmer.means = lsmeans(Model, spec = pairwise ~ WordType|Time)
>> Model.lmer.means = summary(Model.lmer.means)
>> Model.lmer.means
>>
>> Maybe you want the contrast conditional on WordType, not Time? Swap it to:
>> "spec = pairwise ~ Time|WordType"
>>
>> Best,
>> Paul
>>
>>
>> On Fri, 21 Aug 2015 14:04:07 +0300, Becky Gilbert <
>> beckyannegilbert at gmail.com> wrote:
>>
>> Dear list,
>>>
>>> I'm wondering if someone could help me interpret an interaction between two
>>> factors, when one of the factors uses Helmert contrasts?
>>>
>>> I ran a linear mixed effects model (lmer) with reaction times as the DV, 2
>>> fixed factors: Time (2 levels) and Word Type (3 levels), and 2 random
>>> factors: Subjects and Items. I used Helmert contrasts for the Word Type
>>> factor:
>>> - Contrast 1 = level 1 (Untrained) vs levels 2 & 3 (Trained-related and
>>> Trained-unrelated)
>>> - Contrast 2 = level 2 vs. level 3 (Trained-related vs Trained-unrelated)
>>> The data, contrasts, model, summary and model comparisons are listed at the
>>> end of the message.
>>>
>>> Model comparisons with anova() showed a significant interaction between
>>> Time and Word Type. However, I don't know how to get the statistics for
>>> the interactions between Time and each Word Type contrast.
>>>
>>> Based on the t-values for coefficients in the model summary, it looks like
>>> the significant Word Type x Time interaction is driven by the interaction
>>> with the 1st contrast for Word Type (t = 2.61). However, I don't think that
>>> the statistics for the fixed effects coefficients are exactly what I'm
>>> looking for (they are sequential tests, right?). And if these are the
>>> appropriate statistics, I'm aware of the problems with trying to get
>>> p-values from these estimates. So is there a way to do likelihood ratio
>>> tests for each Word Type contrast, or some other way of interpreting the
>>> Word Type x Time interaction?
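>>>
>>> For example (just a sketch of the sort of per-contrast test I mean, using
>>> lmerTest's Satterthwaite approximation rather than a likelihood ratio test):
>>>
>>> library(lmerTest)  # its lmer() adds df and p-values to summary()
>>> m <- lmer(logRT ~ 1 + WordType + Time + WordType:Time +
>>>             (1 + Time | Subject) + (1 | Item), data = rtData)
>>> summary(m)  # the WordType1:Time1 and WordType2:Time1 rows test each
>>>             # contrast's interaction with Time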
>>>
>>> Data structure:
>>>
>>>> str(rtData)
>>>>
>>> 'data.frame': 1244 obs. of 11 variables:
>>> $ Subject : Factor w/ 16 levels "AB","AS","AW",..: 1 1 1 1 1 1 1 1 1 1 ...
>>> $ Item : Factor w/ 48 levels "ANT","BANDAGE",..: 3 4 6 12 13 14 22 29 30 34 ...
>>> $ Response : int 960 1255 651 1043 671 643 743 695 965 589 ...
>>> $ Time : Factor w/ 2 levels "-1","1": 1 1 1 1 1 1 1 1 1 1 ...
>>> $ WordType : Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
>>> $ logRT : num 2.98 3.1 2.81 3.02 2.83 ...
>>>
>>> contrasts(rtData$Time)
>>>>
>>> [,1]
>>> -1 0.5
>>> 1 -0.5
>>>
>>> contrasts(rtData$WordType)
>>>>
>>> [,1] [,2]
>>> 0 0.6666667 0.0
>>> 1 -0.3333333 -0.5
>>> 2 -0.3333333 0.5
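>>>
>>> For reference, contrast matrices like these can be set by hand with
>>> something along these lines:
>>>
>>> contrasts(rtData$Time)     <- cbind(c(0.5, -0.5))
>>> contrasts(rtData$WordType) <- cbind(c(2/3, -1/3, -1/3),
>>>                                     c(0, -0.5, 0.5))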
>>>
>>> Model:
>>> lmer(logRT ~ 1 + WordType + Time + WordType:Time +
>>> (1 + Time|Subject) +
>>> (1|Item),
>>> data = rtData)
>>>
>>> REML criterion at convergence: -2061.2
>>> Scaled residuals:
>>>     Min      1Q  Median      3Q     Max
>>> -2.7228 -0.6588 -0.0872  0.5712  3.7790
>>> Random effects:
>>>  Groups   Name        Variance Std.Dev. Corr
>>>  Item     (Intercept) 0.000933 0.03054
>>>  Subject  (Intercept) 0.004590 0.06775
>>>           Time1       0.005591 0.07478  0.05
>>>  Residual             0.009575 0.09785
>>> Number of obs: 1244, groups: Target, 46; Subject, 16
>>> Fixed effects:
>>>                   Estimate Std. Error t value
>>> (Intercept)      2.8765116  0.0177527  162.03
>>> WordType1        0.0111628  0.0110852    1.01
>>> WordType2        0.0007306  0.0071519    0.10
>>> Time1           -0.0268310  0.0195248   -1.37
>>> WordType1:Time1  0.0301627  0.0115349    2.61
>>> WordType2:Time1 -0.0089123  0.0141624   -0.63
>>>
>>> Model comparisons with anova() for main effects and interaction:
>>>
>>> -full model vs no Word Type x Time interaction
>>>                      Df     AIC     BIC logLik deviance  Chisq Chi Df Pr(>Chisq)
>>> rtModelNoInteraction  9 -2077.5 -2031.3 1047.7  -2095.5
>>> rtModelFull          11 -2080.5 -2024.1 1051.2  -2102.5 7.0388      2    0.02962 *
>>>
>>> -full model vs model without Time and interaction
>>>               Df     AIC     BIC logLik deviance  Chisq Chi Df Pr(>Chisq)
>>> rtModelNoTime  8 -2077.8 -2036.7 1046.9  -2093.8
>>> rtModelFull   11 -2080.5 -2024.1 1051.2  -2102.5 8.7424      3    0.03292 *
>>>
>>> -full model vs model without Word Type and interaction
>>>             Df     AIC     BIC logLik deviance  Chisq Chi Df Pr(>Chisq)
>>> rtModelNoWT  7 -2080.4 -2044.5 1047.2  -2094.4
>>> rtModelFull 11 -2080.5 -2024.1 1051.2  -2102.5 8.0875      4    0.08842 .
>>>
>>> Thanks in advance for any advice!
>>> Becky
>>> ____________________________________________
>>>
>>> Dr Becky Gilbert
>>>
>>>
>>>
>>
>>
>> --
>> Paul Debes
>> DFG Research Fellow
>> University of Turku
>> Department of Biology
>> Itäinen Pitkäkatu 4
>> 20520 Turku
>> Finland
>>
>
>