[R-sig-ME] Dealing with heteroscedasticity in a GLM/M
Paul Johnson
pauljohn32 at gmail.com
Mon Aug 27 18:53:15 CEST 2012
On Thu, Aug 23, 2012 at 12:58 AM, Leila Brook <leila.brook at my.jcu.edu.au> wrote:
> I am hoping to find a way to account for heterogeneity of variance between categories of explanatory variables in a generalised model.
>
> I have searched books and this forum, and haven't found any advice I could use to deal with this assumption in a generalised model context, since I can't fit a variance structure in lme4.
>
> As background to my study:
>
> I used camera stations set up in pairs (one positioned on a track and one off the track) to record my study species, and used the same pairs in each of two seasons. As my surveys were repeated, I have specified camera pair as a random effect. I am using a binomial model in lme4 to model the proportion of nights an animal was recorded, as a function of the fixed effects of season (2 categories), position (categorical: whether on or off the track), area (categorical: one of two areas) and continuous habitat variables, plus interactions between them.
>
> I validated the GLM form of the model (including plotting the deviance residuals against my explanatory variables), and noticed that the variance of the residuals appears to differ across the levels of the categorical variables.
>
Stop there.
In logit/probit frameworks, the error variance is assumed equal for all
groups. It is never estimated; the model is not identified otherwise. In
the latent-variable formulation, y* = xb + e, only the ratio of b to the
scale of e is identified, so if that scale differs across groups, each
group's coefficients are shrunk or stretched by a different unknown
factor. The effect of heteroskedasticity is therefore not just
inefficiency, but parameter bias. This makes logit models much more
suspect than previously believed. It means that all of the work you have
done so far to "validate" your model is dubious and you need to take a
step back.
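A two-minute simulation shows the rescaling. The data here are
completely made up; the true slope is 1 in both groups, but the fitted
logit slope is (true slope)/(group's latent error scale):

    ## same true slope in both groups; group 1 has twice the latent error scale
    set.seed(1)
    n <- 5000
    g <- rep(0:1, each = n)
    x <- rnorm(2 * n)
    s <- ifelse(g == 1, 2, 1)
    ystar <- x + rlogis(2 * n, scale = s)   # latent y* = 1*x + e
    y <- as.numeric(ystar > 0)
    coef(glm(y ~ x, family = binomial, subset = g == 0))["x"]  # about 1.0
    coef(glm(y ~ x, family = binomial, subset = g == 1))["x"]  # about 0.5
    coef(glm(y ~ x + g, family = binomial))["x"]               # pooled: wrong for both

The group-specific slopes disagree even though the underlying effect is
identical, and the pooled fit is not right for either group.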
We are in a bind with logit models. Either we estimate separate models
for the separate groups (to avoid the heteroskedasticity), but then we
cannot compare coefficients across models, because each is scaled by a
different, un-estimated variance. Or we fit one model that pools the
groups, make the wrong equal-variance assumption, and end up with wrong
parameter estimates. I don't mean just a little off. I mean wrong. It's
discouraging.
As far as I know, this problem was first popularized by Paul Allison,
Scott Long, and Richard Williams, but it is nicely surveyed in this
review essay:
Mood, Carina. 2010. "Logistic Regression: Why We Cannot Do What We
Think We Can Do, and What We Can Do About It." European Sociological
Review 26(1): 67-82.
That paper cites the earlier Allison paper and some of Williams's work.
In my opinion, there are no completely safe approaches to dealing with
heteroskedastic group-level error. Richard Williams at Notre Dame gave
an excellent presentation about it. He told me he has a paper
forthcoming in the Stata Journal about it, but I don't feel free to
pass it along to you. I bet his website has more information.
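As far as I know, the closest thing in R to those heterogeneous choice
models is hetglm() in the glmx package, which adds a second linear
predictor for the error scale. A minimal sketch, reusing the simulated
y, x, and g from above (glmx is my suggestion here, not something
Williams endorses):

    ## sketch: heteroskedastic logit; the part after "|" models the error
    ## scale, with one group pinned as the baseline
    library(glmx)   # install.packages("glmx") if needed
    hfit <- hetglm(y ~ x + g | g, family = binomial(link = "logit"))
    summary(hfit)   # mean coefficients plus a scale coefficient for g

That is one way to implement the "pinning" idea in the next paragraph:
the baseline group's scale is fixed and the other group's scale is
estimated relative to it.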
It seems to me that if you try to "pin" one group as the "baseline
variance" group and then add properly structured random effects for
the other ones, you might get a handle on it. The R package dglm has
machinery along those lines: it lets you model the dispersion with a
second formula.
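Roughly, with dglm you hand over a second formula for the dispersion.
A minimal sketch with hypothetical column names (detected successes out
of nights, in a data frame dat like the one described in the question),
assuming dglm's extended quasi-likelihood fitting accepts the binomial
mean model:

    ## sketch: binomial mean model plus a dispersion formula
    library(dglm)   # install.packages("dglm") if needed
    dfit <- dglm(cbind(detected, nights - detected) ~ season * position + area,
                 dformula = ~ position,  # dispersion differs by track position
                 family = binomial, data = dat)
    summary(dfit)

Note that dglm has no random-effects term, so the camera-pair pairing
would have to enter some other way (as a fixed effect, say) in a sketch
like this.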
Good luck. If you get an answer, I'd really like to know what the
state of the art is now (this minute)...
--
Paul E. Johnson
Professor, Political Science        Assoc. Director
1541 Lilac Lane, Room 504           Center for Research Methods
University of Kansas                University of Kansas
http://pj.freefaculty.org           http://quant.ku.edu