[R-sig-ME] model averaging
Ben Bolker
bbolker at gmail.com
Thu Sep 20 18:50:22 CEST 2012
Paul York <frgger372 at ...> writes:
> I think I understand that if you are comparing models using AICc, you
> should use ML to compare models with the same random effects but different
> fixed effects. Therefore ML should be used during model selection.
> However, when you present the effect sizes of your final model, you should
> use REML, because it provides better estimates of beta (please correct me
> if I'm wrong here!). However, I am now interested in proceeding to model
> averaging, and I'm unclear whether I should be using ML or REML for this
> stage of the analysis - ML will provide better estimates of AIC weights (I
> assume?) but REML will provide better estimates of beta. So does anyone
> know which I should be using?
Hmmm. I'm not sure, but ... my understanding was that REML provides
unbiased (for certain classes of models) estimates of the *variances*,
whereas the fixed-effect (beta) estimates are essentially unchanged --
in principle they can differ slightly, because the variance estimates
feed back into the generalized-least-squares fit of the fixed effects,
but in practice they often coincide, as here. You can test this
empirically for one special case:
> library(lme4)  ## for lmer() and the sleepstudy data
> (fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy))
> fm1ML <- update(fm1, REML=FALSE)
> fixef(fm1)
(Intercept) Days
251.40510 10.46729
> fixef(fm1ML)
(Intercept) Days
251.40510 10.46729
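By contrast, the *variance components* do change between the two fits;
if you want to see the difference in the same example, you can compare
(a sketch, using the same fitted objects as above):

> VarCorr(fm1)    ## REML estimates of the variance components
> VarCorr(fm1ML)  ## ML estimates: the random-effect SDs shrink slightly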
The argument about ML vs REML comparisons strictly speaking applies
to likelihood ratio tests, marginal F tests, and other procedures that
compare models with different fixed effects -- REML likelihoods are not
comparable across fixed-effect specifications -- but I think it's
probably a good idea by extension to use ML for other types of model
comparison. I would suggest using it for model averaging as well.
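For example, if you were doing the averaging with the MuMIn package
(an untested sketch -- the package name, and its requirement that
na.action be set to "na.fail" before dredging, are assumptions on my
part, not something from your description of the analysis):

> library(MuMIn)
> ## fit the full model with ML, so that AICc values are comparable
> fm_full <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy, REML=FALSE)
> options(na.action = "na.fail")   ## dredge() requires this
> ma <- model.avg(dredge(fm_full)) ## average over the fixed-effect submodels
> summary(ma)

The averaged coefficients then come from the ML fits throughout, which
sidesteps the question of mixing REML betas with ML-based AIC weights.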