[R-sig-ME] ML or REML for LR tests

Austin Frank austin.frank at gmail.com
Fri Aug 29 06:47:19 CEST 2008


On Thu, Aug 28 2008, Doran, Harold wrote:

>> The likelihood-ratio test approach directly compares these two.
>
> Since these models differ in their fixed effects, you need REML=FALSE
> for the LRT to be meaningful.

This is a standard operating procedure that I picked up and accepted on
faith when I first started using lmer, before I really knew what I was
doing.  It occurs to me that much of my understanding of model
comparison rests on similar faith, so I'd like to check how LR tests
should be used with lmer.  If this is a case of RTFM, please provide a
pointer to the relevant Friendly Manual ;)
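
For concreteness, here is a minimal sketch of the procedure in
question: nested ML fits differing only in their fixed effects,
compared with anova().  (The intercept-only null model m0 is just my
own illustration.)

--8<---------------cut here---------------start------------->8---
library(lme4)
data(sleepstudy)

## Nested fits with the same random effects; REML=FALSE so the
## log-likelihoods, and hence the LR test, are comparable.
m0 <- lmer(Reaction ~ 1    + (1 | Subject), sleepstudy, REML=FALSE)
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML=FALSE)
anova(m0, m1)                          # LR test for the Days effect
--8<---------------cut here---------------end--------------->8---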

1) Can anyone offer a reference where the case is made for doing LR
tests on models fit by ML (as opposed to REML)?

2) Can non-nested ML models with the same number of fixed effects be
meaningfully compared with an LR test?  Something like:

--8<---------------cut here---------------start------------->8---
library(lme4)                          # provides lmer() and sleepstudy
data(sleepstudy)
set.seed(535353)
sleepstudy$Fake <- rnorm(nrow(sleepstudy))
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML=FALSE)
m2 <- lmer(Reaction ~ Fake + (1 | Subject), sleepstudy, REML=FALSE)
anova(m1, m2)                          # Is this test meaningful...

## When possible, test against the superset model
m12 <- lmer(Reaction ~ Days + Fake + (1 | Subject),
            sleepstudy, REML=FALSE)
anova(m1, m2, m12)                     # ... or only this one?
--8<---------------cut here---------------end--------------->8---

3) Is it the case that LR tests between REML models that differ only in
their random effects are meaningful?  Does this apply to both nested
and non-nested models?
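
To make (3) concrete, here is a minimal sketch of the sort of
comparison I have in mind; the random-slope model and the hand-computed
LR statistic are just my illustration, and I realize the naive
chi-square reference may be conservative when variance parameters sit
on the boundary.

--8<---------------cut here---------------start------------->8---
## REML fits (the default) with identical fixed effects but
## different random effects: random intercept vs. random slope.
mR1 <- lmer(Reaction ~ Days + (1 | Subject),    sleepstudy)
mR2 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## LR statistic computed from the REML log-likelihoods; the larger
## model adds a slope variance and a covariance (df = 2).
lrt <- as.numeric(2 * (logLik(mR2) - logLik(mR1)))
pchisq(lrt, df = 2, lower.tail = FALSE)
--8<---------------cut here---------------end--------------->8---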

Thanks for the help,
/au

-- 
Austin Frank
http://aufrank.net
GPG Public Key (D7398C2F): http://aufrank.net/personal.asc



