[R-sig-ME] GLMM estimation readings?
pauljohn32 at gmail.com
Mon Apr 4 13:20:52 CEST 2016
I'm trying to explain GLMM estimation and defend random effects to an
audience of econometricians. I am focused mostly on logit models.
The literature is difficult. I can understand applications and overviews
like the paper by Bolker et al. in Trends in Ecology and Evolution. But I
can't understand much about glmer that is deeper than that. Can you point
me at some books/articles that explain the GLMM estimation process in an
understandable way? Is there a PLS (penalized least squares) derivation for
GLMM? I want to better understand adaptive quadrature and the Laplace
approximation.
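To make the question concrete, here is a small numerical sketch (in Python
rather than R, purely for illustration; all numbers are invented) of the
single-cluster case. The Laplace approximation replaces the integrand of the
marginal likelihood by a Gaussian centered at its mode (nAGQ = 1 in glmer
terms), while adaptive Gauss-Hermite quadrature recenters and rescales the
quadrature nodes at that same mode and sums over several of them:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# One cluster: y successes out of n Bernoulli trials, with
#   logit P(success) = beta0 + b,   b ~ N(0, tau^2).
# The marginal likelihood integrates over b; these numbers are made up.
beta0, tau, y, n = 0.5, 1.0, 7, 10

def log_integrand(b):
    eta = beta0 + b
    loglik = y * eta - n * np.log1p(np.exp(eta))   # binomial log-likelihood
    logprior = -0.5 * np.log(2 * np.pi * tau**2) - b**2 / (2 * tau**2)
    return loglik + logprior

# Newton steps to find the mode b-hat of the log integrand
bhat = 0.0
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-(beta0 + bhat)))
    grad = y - n * p - bhat / tau**2
    hess = -n * p * (1 - p) - 1 / tau**2
    bhat -= grad / hess
p = 1.0 / (1.0 + np.exp(-(beta0 + bhat)))
sd = 1.0 / np.sqrt(n * p * (1 - p) + 1 / tau**2)   # 1/sqrt(-hess) at the mode

# Laplace: Gaussian approximation at the mode (the nAGQ = 1 special case)
laplace = np.exp(log_integrand(bhat)) * np.sqrt(2 * np.pi) * sd

# Adaptive Gauss-Hermite with 15 nodes, recentered at the mode
t, w = hermgauss(15)
nodes = bhat + np.sqrt(2) * sd * t
aghq = np.sqrt(2) * sd * np.sum(w * np.exp(log_integrand(nodes) + t**2))

print(laplace, aghq)   # close, but not identical
```

The point of the sketch is that Laplace is literally the one-node version of
the adaptive scheme, which is why increasing nAGQ refines the same
approximation rather than replacing it.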
You might be able to advise me better if I tell you why I need to know.
I was surprised to learn that economists hate random effects. It is almost
visceral. For the economists, the fixed vs random effects debate is not
philosophical, but practical. In an LMM, the Hausman test seems to bluntly
reject almost all random effects models (see William Greene's Econometric
Analysis). Unmeasured group-level predictors always exist, it
seems, so random effects estimates are biased/inconsistent. Even if you
believe the intercept differences are random, the LMM estimates are
biased/inconsistent, so you should treat them as fixed.
I'm a little surprised there is so little discussion of Hausman's test in
the random effect literature outside economics.
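Since the Hausman test carries so much weight in that literature, a sketch
of the statistic itself may be useful. It compares the fixed-effects
("within") estimates to the random-effects (GLS) estimates; under the null
that random effects are consistent and efficient, the quadratic form is
chi-square with k degrees of freedom. All numbers below are invented for
illustration (for the 2-parameter case the chi-square survival function is
exp(-H/2)):

```python
import numpy as np

# Hausman statistic: H = d' (V_FE - V_RE)^{-1} d, with d = b_FE - b_RE.
# Estimates and covariance matrices below are made up for illustration.
b_fe = np.array([0.80, -1.10])          # fixed-effects ("within") estimates
b_re = np.array([0.50, -0.70])          # random-effects (GLS) estimates
V_fe = np.array([[0.040, 0.002],
                 [0.002, 0.050]])       # FE covariance (less efficient)
V_re = np.array([[0.025, 0.001],
                 [0.001, 0.030]])       # RE covariance (efficient under H0)

d = b_fe - b_re
H = d @ np.linalg.solve(V_fe - V_re, d)
# Under H0, H ~ chi-square with k = 2 df; for 2 df the p-value is exp(-H/2).
p = np.exp(-H / 2)
print(H, p)   # a small p rejects the random effects specification
```

With a gap this size between the two estimators the test rejects, which is
the pattern the economists report seeing in practice.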
One argument used by random effect advocates, that group-level predictors
can be included in an LMM, holds no weight at all. To the economists it is
just more evidence of bias in the LMM: if the group-level intercept
estimates were correct, then the group-level predictors would not be
identifiable, so the estimates thus obtained are useless.
My argument with them so far is based on the characterization of LMM as a
PLS exercise, which I learned in this email list. That makes a point
obvious: the fixed vs random models differ because PLS penalizes the b's,
but fixed estimators do not. The issue is not "random" against "fixed". It
is penalized against unpenalized. The parallel between LMM and ridge
regression and LASSO helps. If the number of observations within groups
grows, then the posterior modes and the fixed effect estimates converge.
The small sample debate hinges on mean square error of the b's. The PLS
view makes it plain that Empirical Bayes gives shrinkage not as an
afterthought (as it seems in the GLS narrative), but as a primary element
(Henderson's estimator). Ironically, the shrinkage effect of LMM, widely
praised in stats and hierarchical modeling applications, raises suspicion
of bias. One might prefer a biased but lower-variance estimate of b, and
that's shrinkage. That's my theme, anyway; we'll see if I can sell it.
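The penalized-vs-unpenalized point can be shown in a few lines. For a
random-intercept model with known variances, the PLS/BLUP solution for one
group is just the raw group mean multiplied by the shrinkage factor
n*tau^2 / (n*tau^2 + sigma^2), which tends to 1 as the within-group sample
grows. A Python sketch with made-up parameters (treating the grand mean as
known, for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, sigma = 0.0, 1.0, 2.0          # grand mean, RE sd, residual sd
b_true = rng.normal(0, tau)             # one group's true random intercept

for n in (5, 50, 5000):                 # within-group sample size grows
    y = mu + b_true + rng.normal(0, sigma, size=n)
    # Fixed-effects estimate of the group intercept: the raw group mean
    # (grand mean treated as known, for simplicity).
    b_fixed = y.mean() - mu
    # PLS/BLUP: minimize sum (y - mu - b)^2 + (sigma^2/tau^2) * b^2,
    # whose solution shrinks the group mean toward zero.
    shrink = n * tau**2 / (n * tau**2 + sigma**2)
    b_pls = shrink * b_fixed
    print(n, round(shrink, 3), round(b_fixed, 3), round(b_pls, 3))
```

At n = 5 the penalty pulls the estimate roughly halfway to zero; by
n = 5000 the shrinkage factor is essentially 1 and the penalized and
unpenalized estimates coincide, which is the convergence claimed above.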
In that context, I come to the chore of comparing a glmer estimate with a
fixed-effects method known as conditional logit, or Chamberlain's panel
logit.
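The appeal of that estimator is that conditioning on the within-group total
makes the group intercept cancel algebraically. A toy sketch (made-up
numbers, T = 2 periods) showing the cancellation numerically:

```python
import numpy as np

# Chamberlain's conditional logit, T = 2 case: with
#   P(y_it = 1) = expit(alpha_i + x_it * beta),
# conditioning on y_i1 + y_i2 = 1 eliminates alpha_i:
#   P(y_i1 = 1 | one success) = expit((x_i1 - x_i2) * beta).
def expit(z):
    return 1 / (1 + np.exp(-z))

beta, x1, x2 = 0.8, 1.5, -0.5           # invented values for illustration

for alpha in (-3.0, 0.0, 3.0):          # wildly different group intercepts
    p1, p2 = expit(alpha + x1 * beta), expit(alpha + x2 * beta)
    # P(y1 = 1, y2 = 0 | exactly one success in the pair)
    cond = p1 * (1 - p2) / (p1 * (1 - p2) + (1 - p1) * p2)
    print(alpha, round(cond, 6))        # identical for every alpha

print(round(expit((x1 - x2) * beta), 6))  # the closed form, same number
```

So the conditional likelihood depends on beta alone, with no penalty, no
distributional assumption on the intercepts, and no shrinkage, which is
exactly why it is the natural fixed-effects foil for glmer.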