[R-sig-ME] GLMM estimation readings?

Malcolm Fairbrother M.Fairbrother at bristol.ac.uk
Tue Apr 5 12:02:08 CEST 2016


Hi Paul,
(1) To my mind, it would be helpful to keep specification and estimation
separate. RE vs. FE is a specification issue (more on this below), while
adaptive quadrature vs. Laplace approximation is an estimation issue. (And
that doesn't get into issues of Bayesian vs. frequentist, which might arise
if you start using MCMC estimation; personally I find MCMC quite useful
for G/LMMs, and the simulation studies I've seen have generally found it to
be the most reliable.)
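To make the estimation side concrete, here is a minimal glmer sketch on
simulated data (everything below, including the parameter values, is made up
purely for illustration) comparing the default Laplace approximation with
adaptive Gauss-Hermite quadrature for a random-intercept logit model:

library(lme4)

## simulate a small random-intercept logit data set (hypothetical values)
set.seed(1)
g <- gl(50, 10)                             # 50 groups of 10 observations
x <- rnorm(500)
u <- rnorm(50, sd = 1)                      # group-level deviations
y <- rbinom(500, 1, plogis(-0.5 + 0.8 * x + u[as.integer(g)]))
d <- data.frame(y, x, g)

## Laplace approximation (glmer's default, nAGQ = 1)
m_laplace <- glmer(y ~ x + (1 | g), data = d, family = binomial)

## adaptive Gauss-Hermite quadrature with 25 nodes
## (only available for a single scalar random effect)
m_agq <- glmer(y ~ x + (1 | g), data = d, family = binomial, nAGQ = 25)

## compare the fixed-effect estimates from the two approximations
cbind(Laplace = fixef(m_laplace), AGQ25 = fixef(m_agq))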
(2) Hausman: This simply tests whether the within-group and between-group
effects are different. Usually they will be, and the Hausman test will tell
you so. Economists (but not only economists) often take this to mean that
Hausman is a "specification test" adjudicating between RE and FE models,
and thus usually "rejecting" RE. However, as you suggest, one can easily
allow for different between and within effects in a RE model, simply by
including group means of each X as level-2 covariates. There's a lot of
misunderstanding about this.
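In lmer terms the group-means device is just one extra covariate. Here is a
minimal sketch, assuming a hypothetical data frame dat with a continuous
outcome y, a covariate x, and a grouping factor g (all names made up for
illustration):

library(lme4)

## add the group mean of x as a level-2 covariate (the "Mundlak device")
dat$x_mean <- ave(dat$x, dat$g)

## In this parameterization the coefficient on x is the within effect, and
## the coefficient on x_mean is the difference between the between and
## within effects, so testing x_mean against zero plays much the same role
## as the Hausman test.
m_mundlak <- lmer(y ~ x + x_mean + (1 | g), data = dat)
summary(m_mundlak)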
(3) Bias: If the between and within effects are different, but you estimate
a single beta that does not distinguish between them, that beta will be a
weighted average of the two. As the economists suggest, this beta will be a
biased estimator of the purely within estimate that you get from a FE
model. But, again, if you include the group means in a RE model, you can
get unbiased estimates of both the within and between relationships.
(In a FE model, you get only the within, not the between.)
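Continuing with the hypothetical dat from above, the same model can be
reparameterized so that each coefficient is directly interpretable:

## split x into its between and within components
dat$x_between <- ave(dat$x, dat$g)         # group means of x
dat$x_within  <- dat$x - dat$x_between     # deviations from the group mean

m_wb <- lmer(y ~ x_within + x_between + (1 | g), data = dat)
fixef(m_wb)  # x_within estimates the purely within relationship (what a FE
             # model gives you); x_between estimates the between relationship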
(4) If the (within) relationship between some x and y varies across groups,
then the standard errors returned by a FE model will be anticonservative
(even if Stata consecrates them as "robust"). So will the SEs returned by a
random intercepts-only RE model. Only a random-slopes RE model will return
unbiased SEs.
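The random-slopes version of point (4) is a one-line change (again using the
hypothetical names from above):

## random intercept only vs. a random slope on the within covariate
m_ri <- lmer(y ~ x_within + x_between + (1 | g), data = dat)
m_rs <- lmer(y ~ x_within + x_between + (x_within | g), data = dat)

## if the within slope really does vary across groups, only m_rs's standard
## error for x_within reflects that extra between-group variability
cbind(ri = coef(summary(m_ri))[, "Std. Error"],
      rs = coef(summary(m_rs))[, "Std. Error"])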
All of the above is discussed and demonstrated in the following three
papers (one unpublished):
http://dx.doi.org/10.1017/psrm.2014.7
http://dx.doi.org/10.1017/psrm.2013.24
www.researchgate.net/publication/299604336_Fixed_and_Random_effects_making_an_informed_choice
The first two papers are the most cited from *Political Science Research &
Methods*.
Hope that's useful,
Malcolm



> From: Paul Johnson <pauljohn32 at gmail.com>
> To: "R-SIG-Mixed-Models at r-project.org"
>         <r-sig-mixed-models at r-project.org>
> Subject: [R-sig-ME] GLMM estimation readings?
>
> I'm trying to explain GLMM estimation and defend random effects to an
> audience of econometricians. I am focused mostly on logit models with
> random intercepts.
>
> The literature is difficult. I can understand applications and overviews
> like the paper by Bolker et al in Trends in Ecology and Evolution. But I
> can't understand much about glmer that is deeper than that.  Can you point
> me at some books/articles that explain the GLMM estimation process in an
> understandable way? Is there a PLS derivation for GLMM?  I want to better
> understand adaptive quadrature and the Laplace approximation.
>
> You might be able to advise me better if I tell you why I need to know.
>
> I was surprised to learn that economists hate random effects. It is almost
> visceral. For the economists, the fixed vs random effects debate is not
> philosophical, but rather practical. In an LMM, the Hausman test seems to
> bluntly reject almost all random effects models.  (See William Greene's
> Econometrics book).  Unmeasured group-level predictors always exist, it
> seems, so random effects estimates are biased/inconsistent. Even if you
> believe the intercept differences are random, LMM estimates are
> biased/inconsistent, so you should treat them as fixed.
>
> I'm a little surprised there is so little discussion of Hausman's test in
> the random effect literature outside economics.
>
> One argument used by random effect advocates, that group-level predictors
> can be included in an LMM, holds no weight with them at all. To them it is
> just evidence of bias in the LMM: the estimates thus obtained are useless
> because, if the group-level intercept estimates were correct, then the
> group-level predictors would not be identifiable.
>
> My argument with them so far is based on the characterization of LMM as a
> PLS exercise, which I learned on this email list. That makes a point
> obvious: the fixed vs random models differ because PLS penalizes the b's,
> but fixed estimators do not. The issue is not "random" against "fixed". It
> is penalized against unpenalized. The parallel between LMM, ridge
> regression, and LASSO helps. If the number of observations within groups
> grows, then the posterior modes and the fixed effect estimates converge.
> Yes?
>
> The small sample debate hinges on mean square error of the b's. The PLS
> view makes it plain that Empirical Bayes gives shrinkage not as an
> afterthought (as it seems in the GLS narrative), but as a primary element
> (Henderson's estimator). Ironically, the shrinkage effect of LMM, widely
> praised in stats and hierarchical modeling applications, raises suspicion
> of bias. One might prefer a biased but lower-variance estimate of b, and
> that's shrinkage. That's my theme, anyway; we'll see if I can sell it.
>
> In that context, I come to the chore of comparing a glmer estimate with a
> fixed effect method known as conditional logit or Chamberlain's panel logit
> model.
>
> pj
> Paul Johnson
> http://pj.freefaculty.org
>
>
