[R-sig-ME] understanding log-likelihood/model fit
Daniel Ezra Johnson
danielezrajohnson at gmail.com
Wed Aug 20 15:01:29 CEST 2008
Everyone agrees about what happens here:
library(lme4)  # needed for lmer()

Nsubj <- 10     # subjects per group
Ngrp <- 2       # number of groups (levels of the fixed effect)
NsubjRep <- 5   # replicate observations per subject

set.seed(123)
test1s <- data.frame(
    subject  = rep(seq(Nsubj * Ngrp), each = NsubjRep),
    response = 500 +
        c(rep(-100, Nsubj * NsubjRep), rep(100, Nsubj * NsubjRep)) +
        rnorm(Nsubj * Ngrp * NsubjRep, 0, 10),
    fixed    = rep(c("A", "B"), each = Nsubj * NsubjRep))

null1  <- lmer(response ~ (1 | subject), test1s)
fixed1 <- lmer(response ~ fixed + (1 | subject), test1s)
I still have two questions, which I'll try to restate. I should note
that I have attempted to understand the mathematical details of ML
mixed-effects model fitting, and it's a bit beyond me. But I hope that
someone can provide an answer I can understand.
Question 1: When you have an "outer" fixed effect and a "subject"
random effect in the same model, why, specifically, does the fit
(apparently) converge in such a way that the fixed effect is maximized
and the random-effect variance is minimized? (Not so much why it
should, as why it does. This is the 'fixed1' case.)
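For concreteness, here is the kind of inspection I mean (standard lme4
accessors; the comments give the rough magnitudes this simulation
produces, not exact values):

fixef(fixed1)    # the group difference (roughly 200) goes to the fixed effect
VarCorr(fixed1)  # here the subject variance is small, near the residual noise
VarCorr(null1)   # in null1 the subject variance is large (around 100^2),
                 # since the group difference must be carried by the ranefs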
Question 2: Take the fixed1 model from Question 1 and compare it to
the null1 model, which has the random subject effect but no fixed
effect. The predicted values of the two models -- the ones from
fitted(), which include the ranefs -- are virtually identical. So why
does fixed1 have a lower deviance, and why is it preferred to null1 in
a likelihood ratio test? (Again, I'm not asking why it's the better
model. I'm asking about the software, the estimation procedure, and/or
the theory of likelihood as applied to such cases.)
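In case it helps, this is the comparison I have in mind (again just a
sketch using standard lme4 functions):

max(abs(fitted(fixed1) - fitted(null1)))  # tiny: the fitted values nearly agree
anova(null1, fixed1)                      # yet the likelihood ratio test
                                          # strongly favors fixed1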
D