[R-sig-ME] Selecting random effects in lme4: ML vs. QAICc

Ben Bolker bbolker at gmail.com
Sun Aug 15 23:46:06 CEST 2010


  There are a number of issues here; I will comment in-line ...

Richard Feldman wrote:
> I am trying to select between two models that differ in their random
> effects. Running a likelihood test gives me a different result than
> using information criteria.

  In general you shouldn't be applying both methods in the same
analysis, except for pedagogical/self-teaching purposes. The two tests
answer different questions, and trying both leads to the temptation to
cherry-pick.
>
> My models:
>
> model.1 <- glmer(Y ~ V1 + V2 + V1:V2 + (V2|SITE), data=Data, family =
> quasipoisson, REML = TRUE)
>
> model.2 <- glmer(Y ~ V1 + V2 + V1:V2 + (1|SITE), data=Data, family =
> quasipoisson, REML = TRUE)

  Specifying REML=TRUE has no effect in glmer models.  Note that ?glmer
lists no REML argument for glmer, so the spurious argument is silently
swallowed by the optional "..." argument; arguably you should have
gotten a warning, but you won't.

> I use quasipoisson because they are highly overdispersed:
>
> lme4:::sigma(model.1)  # 5.886659
> lme4:::sigma(model.2)  # 101.6434
 
  If you are going to fit a quasi-likelihood model then generally you
should find sigma for the most complex model (model 1 in your case) and
then apply the same estimated value of sigma for all models.
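
  Something like the following sketch, assuming Poisson refits of the
two models (model.1p and model.2p are hypothetical names, and
overdisp()/qaic() are ad hoc helpers I'm defining here, not lme4
functions):

library(lme4)

## c-hat (= sigma^2): sum of squared Pearson residuals divided by the
## residual degrees of freedom
overdisp <- function(fit) {
    sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
}

## estimate c-hat once, from the *most complex* model ...
chat <- overdisp(model.1p)

## ... and use that same value to scale every candidate model
qaic <- function(fit, chat) {
    k <- attr(logLik(fit), "df") + 1  ## +1 for estimating c-hat itself
    -2 * as.numeric(logLik(fit)) / chat + 2 * k
}
qaic(model.1p, chat)
qaic(model.2p, chat)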

> The results of the likelihood test: (I know that technically one
> should not use (RE)ML for quasi-distributions. However the results
> are nearly identical whether I use quasipoisson or poisson as the
> family)

  Clarification: generally one should not use likelihood ratio tests (ML
vs REML is a separate issue) at all on quasi-likelihood fits, although
Venables and Ripley (MASS, p. 210) do suggest using an F test on the
scaled difference in deviance (this is implemented for GLMs with the
'test="F"' option to anova.glm()).  (In contrast, they say in the same
section that one *cannot* use AIC in this case, because one doesn't have
a real likelihood -- presumably they don't agree with the line of
reasoning leading to QAIC.)
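
  For an ordinary (non-mixed) quasipoisson GLM that F test is a
one-liner; a toy sketch, reusing the variable names from the models
above:

g.full    <- glm(Y ~ V1 * V2, family = quasipoisson, data = Data)
g.reduced <- glm(Y ~ V1 + V2, family = quasipoisson, data = Data)
## F test on the scaled change in deviance (V&R, MASS p. 210)
anova(g.reduced, g.full, test = "F")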

  I'm also assuming that your sample size is large enough that the LRT
applies (which is often a problem with GLMMs when the number of
random-effects levels is small).


> anova(model.2, model.1)
>
> #          Df    AIC    BIC   logLik   Chisq Chi Df Pr(>Chisq)
> # model.2   9 4648.3 4665.1 -2315.13
> # model.1  14  346.8  373.0  -159.39  4311.5      5  < 2.2e-16 ***

  Besides the fact that you shouldn't do this, it's unlikely that
anova() is correcting for overdispersion.
>
> Now, I run the same models with a poisson distribution and then
> adjust the AIC by the overdispersion and the number of parameters to
> obtain QAICc. With all these penalties taken into account, model.2
> has the lowest QAICc. My gut instinct is to go with model.1, however,
> because the overdispersion of model.2 is so high that perhaps it
> shouldn't even be a candidate model. On the other hand, perhaps
> adjusting the AIC really does put the two models on a more level
> playing field.

  How did you calculate overdispersion?  If you're going to do this (and
there are still some questions about whether this works) you should use
sigma^2, not sigma, as your estimate of overdispersion.  If you are
(heaven forbid) following the supplementary material of the Bolker et al
2009 TREE article, please look in the worked examples section of
glmm.wikidot.com for an updated and corrected version.
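
  For reference, a sketch of the small-sample QAICc using sigma^2 as
c-hat, reusing the overdisp() helper and the hypothetical Poisson
refits from the sketch above (n is the number of observations):

## QAICc = QAIC + 2*k*(k+1)/(n - k - 1)
qaicc <- function(fit, chat, n) {
    k <- attr(logLik(fit), "df") + 1
    -2 * as.numeric(logLik(fit)) / chat + 2 * k +
        2 * k * (k + 1) / (n - k - 1)
}
chat <- overdisp(model.1p)  ## sigma^2 from the most complex model
qaicc(model.1p, chat, n = nrow(Data))
qaicc(model.2p, chat, n = nrow(Data))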



