[R-sig-ME] glmer Z-test with individual random effects
John Maindonald
john.maindonald at anu.edu.au
Fri Nov 12 00:45:00 CET 2010
The Wald tests (the z-statistics and p-values reported by glm and
glmer) are approximate for both glm models and glmm models with
non-identity links. The approximation can fail badly if the link is
highly non-linear over a region of the response that is relevant for a
parameter of interest. The Hauck-Donner phenomenon, where the
z-statistic decreases as the effect estimate increases, is an extreme
example.
(This happens, e.g., when one of the levels being compared gives a
fitted value close to a binomial proportion of 0 or 1, or close to a
Poisson mean of 0.)
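A minimal simulated illustration of Hauck-Donner, with made-up
grouped binomial data; past some point the estimate keeps growing
while the Wald z falls:

  ## second group's successes out of 100 move towards the boundary
  for (s in c(75, 90, 97, 99)) {
    dat <- data.frame(succ = c(50, s), fail = c(50, 100 - s), x = c(0, 1))
    fit <- glm(cbind(succ, fail) ~ x, family = binomial, data = dat)
    print(round(summary(fit)$coefficients["x", ], 3))  # estimate, SE, z, p
  }
  ## beyond about 90/100 the estimate keeps increasing while z shrinks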
The additional complication for a glmm is that the SE may have
two components -- e.g., a Poisson or binomial error, and a random
normal error that is added on the scale of the linear predictor. This
random normal error somewhat alleviates the variance-change effects
(including Hauck-Donner) that result from non-linearity in the link,
while adding uncertainty to the SE estimate. If the contribution from
the glm family error term is largish relative to the contribution from
the random normal error (> 3 or 4 times as large?), treating the
z-statistic as normal may not be too unreasonable in many
circumstances, even if the relevant degrees of freedom are as small
as maybe 4 (e.g., where the test is for consistency across 5 locations).
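A sketch of the kind of model I have in mind (lme4; 'dat', 'count',
'treatment' and 'location' are placeholder names):

  library(lme4)
  ## one factor level per row: an observation-level random effect
  ## that absorbs extra-Poisson variation on the linear predictor scale
  dat$obs <- factor(seq_len(nrow(dat)))
  fit <- glmer(count ~ treatment + (1 | location) + (1 | obs),
               family = poisson, data = dat)
  summary(fit)   # reports Wald z-statistics and p-values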
On data where I seem to have this situation, fairly consistently
across a number of responses, I initially used MCMCglmm().
I was concerned about the contribution from the random normal
error (including a contribution from an observation level random
effect term). I found that I had to choose a somewhat informative
prior (inverse Wishart with V=1, nu=0.002) to consistently get
convergence. With this prior, the MCMCglmm credible intervals
were remarkably close to glmer confidence intervals, treating the
z-statistics as normal.
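For the record, this is how I specify that prior (a sketch; the
formula and data are placeholders for my actual models, and I put the
same inverse-Wishart prior on both variance components):

  library(MCMCglmm)
  prior1 <- list(R = list(V = 1, nu = 0.002),              # overdispersion term
                 G = list(G1 = list(V = 1, nu = 0.002)))   # one random-effect term
  m <- MCMCglmm(count ~ treatment, random = ~ location,
                family = "poisson", data = dat, prior = prior1)
  summary(m)   # posterior means with 95% credible intervals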
No doubt the effect of the chosen prior is to insist that the true
random normal error variance is fairly close to the variance as
estimated by glmer(). A frustration (for me, at least) with using
MCMCglmm is that I do not know just what MCMCglmm() is doing
in this respect, short of doing some careful investigative exploration
of the MCMC simulation results (which is an after-the-event check,
where I'd like to know before the event). Comments, Jarrod?
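The interval comparison I mention above is along these lines (a
sketch, assuming 'fit' from glmer and 'm' from MCMCglmm as sketched
earlier):

  library(coda)                  # loaded by MCMCglmm in any case
  HPDinterval(m$Sol)             # credible intervals for the fixed effects
  est <- fixef(fit)
  se  <- sqrt(diag(vcov(fit)))
  cbind(lower = est - 1.96 * se,
        upper = est + 1.96 * se) # Wald intervals, treating z as N(0,1)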
All this is to emphasise that, unless the relevant degrees of
freedom are large, we are not in the arena of precise science.
John Maindonald email: john.maindonald at anu.edu.au
phone: +61 2 6125 3473    fax: +61 2 6125 5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
http://www.maths.anu.edu.au/~johnm
On 12/11/2010, at 2:18 AM, Ben Bolker wrote:
> On 11/11/2010 09:58 AM, Jens Åström wrote:
>> Dear list,
>>
>> As I have read (Bolker et al. 2009 TREE), the Wald Z test is only
>> appropriate for GLMMs in cases without overdispersion.
>>
>> Assuming we use family=poisson with lmer and tackle overdispersion by
>> incorporating an individual random effect AND this adequately "reduces"
>> the overdispersion, is it then OK to use the Wald z test as reported by
>> lmer?
>>
>> In other words, are the p-values reported by lmer in those cases
>> useful/"correct"? Or do they suffer from the usual problems with
>> figuring out the number of parameters used by the random effects?
>
> They are equivalent to assuming an infinite/large 'denominator degrees
> of freedom'. If you have a large sample size (both a large number of
> total samples relative to the number of parameters, and a large number
> of random-effects levels/blocks) then this should be reasonable -- if
> not, then yes, the 'usual problems with figuring out the number of
> parameters' are relevant. On the other hand, if you're willing to assume
> that the sample size is large, then likelihood ratio tests
> (anova(model1,model2)) are probably better than the Wald tests anyway.
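
Concretely, a sketch of that comparison, with model1 the fitted glmer
model and model0 the same model with the term of interest dropped
(the term name is a placeholder):

  model0 <- update(model1, . ~ . - treatment)
  anova(model0, model1)   # likelihood ratio test, chi-square reference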
>
>>
>> Secondly, is it good practice to judge lmer's capability of "reducing"
>> the overdispersion by summing the squared residuals (pearson) and
>> compare this to a chi square distribution (with N-1 degrees of freedom)?
>
> I would say this is reasonable, although again it's a rough guide
> because the true degrees of freedom are a bit fuzzy -- it should
> probably be at most N-(fixed effect degrees of freedom)?
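
That check, as a sketch, assuming a fitted glmer object 'fit':

  rp  <- residuals(fit, type = "pearson")
  dfr <- nrow(model.frame(fit)) - length(fixef(fit))  # N - fixed-effect df
  sum(rp^2) / dfr                                     # ratio near 1 is reassuring
  pchisq(sum(rp^2), df = dfr, lower.tail = FALSE)     # approximate p-value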
>
> Would be happy to hear any conflicting opinions.
>
> Ben Bolker
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models