# [R-sig-ME] Interpreting Mixed Effects Model on Fully Within-Subjects Design

Dave Deriso deriso at gmail.com
Thu May 20 20:56:56 CEST 2010

```
Hi Daniel,

Thank you so much for taking the time to explain this.

I guess I am having trouble believing the results.

condition x diff: p = .001
condition: p = .001
diff: p = .001

When I run the ANOVA, I get more conservative results:

summary(aov(value~condition*diff*rep + Error(subject/(condition*diff*rep))))
condition x diff: p = 2.311e-05 ***

summary(aov(value~(condition*rep+diff*rep)+Error(subject/(condition*rep+diff*rep))))
condition: p = 0.02116 *
diff: p = 2.2e-16 ***

How can I keep the tests appropriately conservative here?

Best,
Dave

On Thu, May 20, 2010 at 11:38 AM, Daniel Ezra Johnson
<danielezrajohnson at gmail.com> wrote:
>>> m0 = lme(value~condition+diff,random=~1|subject/rep)
>>> m1 = lme(value~condition*diff,random=~1|subject/rep)
>>> anova(m0,m1)
>
> This will give you the p-value for the interaction since that's the
> only thing different between the two models.
>
> if you similarly compared
>
> lme(value~condition+diff...) to
> lme(value~condition...)
>
> that would be a test of "diff", whereas comparing
>
> lme(value~condition+diff...) to
> lme(value~diff...)
>
> would be a test of "condition".
>
> there's more than one way to test these, but using anova() like this
> is, I think, reasonable.
> of course, if you follow the mixed-model literature you'll see that
> people have shown these tests (likelihood-ratio tests) to be
> anti-conservative (p-values too low) when applied to mixed models.
>
> Dan
>

```
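For reference, here is a minimal self-contained sketch of the nested-model comparisons Dan describes, run on simulated data (the simulated structure and values are illustrative, not from the original analysis; only the variable names mirror the thread). One detail worth noting: likelihood-ratio tests on fixed effects should be done on models fit with `method = "ML"`, since `lme`'s default REML fits are not comparable across different fixed-effects specifications.

```r
## Illustrative sketch of the nested-model LRTs from the thread.
## The data are simulated; names mirror the thread's variables.
library(nlme)

set.seed(1)
d <- expand.grid(subject   = factor(1:12),
                 condition = factor(c("A", "B")),
                 diff      = factor(c("easy", "hard")),
                 rep       = factor(1:3))
d$value <- rnorm(nrow(d), mean = as.numeric(d$diff))

## LRTs on fixed effects require maximum likelihood, not the REML default
m0 <- lme(value ~ condition + diff, random = ~1 | subject/rep,
          data = d, method = "ML")
m1 <- lme(value ~ condition * diff, random = ~1 | subject/rep,
          data = d, method = "ML")

anova(m0, m1)  # LRT for the condition:diff interaction

## Test of "diff": compare the additive model against condition alone
m_cond <- lme(value ~ condition, random = ~1 | subject/rep,
              data = d, method = "ML")
anova(m_cond, m0)
```

The anti-conservative tendency Dan mentions applies to these LRTs; the conditional F-tests from `anova(m1)` on a single fit are a common cross-check.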