[R-sig-ME] Chisq = 0 and p = 1 with nested lmer models

Joshua Wiley jwiley.psych at gmail.com
Thu Sep 22 04:43:32 CEST 2011


On Fri, Sep 16, 2011 at 3:18 PM, Ben Bolker <bbolker at gmail.com> wrote:
> Geoff Brookshire <creepingbrain at ...> writes:
>
>>
>> Dear all,
>>
>> I'm using lme4 to analyze some reaction time data I've collected, and
>> am running into an odd problem. I'm not certain if it's a bug or a
>> problem with my thinking, but hopefully (and probably) it's the latter.
>>
>> When trying to compare two models, one of which is the same as the
>> other but with one term removed, the result is that there is no
>> difference between the models: Chisq = 0, Chisq Df = 0, and p = 1.
>>
>> A little about the design:
>> This is a reaction-time experiment with a 2x2x2 design over factors
>> F1, F2, and F3. F1 is varied within-subjects and within-items, and F2
>> & F3 are varied within-subjects and between-items.
>>
>> So far, I've been testing the models like this:
>>
>> fm.1 <- lmer(outcome ~ F1*F2*F3 + (1 | subject) + (1 | item),
>>    data = samp.dat, REML = FALSE)
>> fm.2 <- lmer(outcome ~ F1*F2*F3 - F1:F2:F3 + (1 | subject) + (1 | item),
>>    data = samp.dat, REML = FALSE)
>> anova(fm.1, fm.2)
>>
>> This works and gives what we expect. When I try to test the two-way
>> interactions, though, the problem comes up:
>>
>> fm.3 <- lmer(outcome ~ F1*F2*F3 - F1:F2 + (1 | subject) + (1 | item),
>>    data = samp.dat, REML = FALSE)
>> anova(fm.1, fm.3)
>>
>> This gives the following output:
>>
>> fm.1: outcome ~ F1 * F2 * F3 + (1 | subject) + (1 | item)
>> fm.3: outcome ~ F1 * F2 * F3 - F1:F2 + (1 | subject) + (1 | item)
>>      Df     AIC     BIC logLik Chisq Chi Df Pr(>Chisq)
>> fm.1  11 -677.34 -617.72 349.67
>> fm.3  11 -677.34 -617.72 349.67     0      0          1
>>
>> The same thing happens when looking at F2:F3 and F1:F3. Any ideas?
>> Just entering anova(fm.1) gives sensible results and F-values for the
>> 2-way interactions.
>
>  When you subtract F1:F2 from F1*F2*F3 you still get a fully
> specified model; it is just reparameterized (i.e. you don't reduce
> the number of parameters this way). This comes from the way that R
> itself interprets model formulae (i.e. it is more basic than lme4),
> which is why both fits show the same df, logLik, and AIC, and hence
> Chisq = 0.
>
>  Here's an experiment confirming this:
>
>> d <- expand.grid(F1=LETTERS[1:2],F2=letters[1:2],F3=factor(1:2))
>> d
>  F1 F2 F3
> 1  A  a  1
> 2  B  a  1
> 3  A  b  1
> 4  B  b  1
> 5  A  a  2
> 6  B  a  2
> 7  A  b  2
> 8  B  b  2
>> model.matrix(~F1*F2*F3,d)
>  (Intercept) F1B F2b F32 F1B:F2b F1B:F32 F2b:F32 F1B:F2b:F32
> 1           1   0   0   0       0       0       0           0
> 2           1   1   0   0       0       0       0           0
> 3           1   0   1   0       0       0       0           0
> 4           1   1   1   0       1       0       0           0
> 5           1   0   0   1       0       0       0           0
> 6           1   1   0   1       0       1       0           0
> 7           1   0   1   1       0       0       1           0
> 8           1   1   1   1       1       1       1           1
>
>
>> model.matrix(~F1*F2*F3-F1:F2,d)
>  (Intercept) F1B F2b F32 F1B:F32 F2b:F32 F1B:F2b:F31 F1B:F2b:F32
> 1           1   0   0   0       0       0           0           0
> 2           1   1   0   0       0       0           0           0
> 3           1   0   1   0       0       0           0           0
> 4           1   1   1   0       0       0           1           0
> 5           1   0   0   1       0       0           0           0
> 6           1   1   0   1       1       0           0           0
> 7           1   0   1   1       0       1           0           0
> 8           1   1   1   1       1       1           0           1
>
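>  A quick way to confirm that the second formula still describes the
> same model is to check that the two design matrices span the same
> column space (a sketch, using qr() on the toy data d above):
>
> X1 <- model.matrix(~F1*F2*F3, d)
> X2 <- model.matrix(~F1*F2*F3 - F1:F2, d)
> qr(X1)$rank             ## 8
> qr(X2)$rank             ## 8, same number of parameters as X1
> qr(cbind(X1, X2))$rank  ## still 8: the matrices span the same space
>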
>  If you have really thought this through and know that you
> have a sensible hypothesis corresponding to setting parameters
> 5 through 7 of the original parameterization (the three two-way
> interaction columns) to zero, you can do it ...
>
> d <- transform(d, F3way = as.numeric(F1:F2:F3 == "B:b:2"))
> model.matrix(~ F1 + F2 + F3 + F3way, data = d)
>
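>  (In the original problem, assuming samp.dat's factors are coded like
> the toy d above, the corresponding 3-df test would be something like:)
>
> samp.dat <- transform(samp.dat,
>     F3way = as.numeric(F1:F2:F3 == "B:b:2"))
> fm.red <- lmer(outcome ~ F1 + F2 + F3 + F3way + (1 | subject) + (1 | item),
>     data = samp.dat, REML = FALSE)
> anova(fm.1, fm.red)  ## jointly tests parameters 5 through 7
>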
> A more tedious but general way to do this is to take the
> original model matrix, pull out the bits you want, rename
> them if necessary (colons in names will not play nicely
> with formulae) and use the numeric dummy variables as
> predictors in a new model.
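>
>  Something like this sketch, again assuming samp.dat's factors are
> coded like the toy d, so that the F1:F2 column is named "F1B:F2b":
>
> X <- model.matrix(~F1*F2*F3, data = samp.dat)
> X <- X[, colnames(X) != "F1B:F2b"]      ## drop only the F1:F2 column
> colnames(X) <- make.names(colnames(X))  ## colons -> dots, formula-safe
> newdat <- data.frame(samp.dat, X)
> fm.3b <- lmer(outcome ~ 0 + X.Intercept. + F1B + F2b + F32 + F1B.F32 +
>     F2b.F32 + F1B.F2b.F32 + (1 | subject) + (1 | item),
>     data = newdat, REML = FALSE)
> anova(fm.1, fm.3b)  ## now a real 1-df test of the F1:F2 coefficient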

I could be way off the mark, but given that most model functions accept
matrices directly, isn't the easy way something like the following?

tmp <- model.matrix(~ hp + factor(am) * factor(vs), data = mtcars)
## the -1 is needed because tmp already contains an intercept column;
## this reproduces the formula-based fit below it
summary(lm(mtcars$mpg ~ tmp - 1))
summary(lm(mpg ~ hp + factor(am) * factor(vs), data = mtcars))

## drop the factor(am) main-effect column (column 3) and refit
tmp2 <- tmp[, -3]
lmer(mpg ~ -1 + tmp2 + (1 | cyl), data = mtcars)


lmer even seems fine with it.
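
For the original question, that might look like this (untested, and
assuming the factor coding from Ben's toy example, where the F1:F2
term is column 5 of the full matrix):

X2 <- model.matrix(~ F1*F2*F3, data = samp.dat)[, -5]  # drop F1B:F2b
fm.3m <- lmer(outcome ~ -1 + X2 + (1 | subject) + (1 | item),
    data = samp.dat, REML = FALSE)
anova(fm.1, fm.3m)  # a genuine 1-df comparison instead of Chisq = 0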

>
>  Hope that helps.
>
>  Ben Bolker
>



-- 
Joshua Wiley
Ph.D. Student, Health Psychology
Programmer Analyst II, ATS Statistical Consulting Group
University of California, Los Angeles
https://joshuawiley.com/



