[R] summary: Generalized linear mixed model software
Douglas Bates
bates at stat.wisc.edu
Fri Apr 12 22:25:48 CEST 2002
Jason Liao <jg_liao at yahoo.com> writes:
> Thanks to those who responded to my inquiry about generalized linear
> mixed models in R and S-PLUS. Before I summarize the software, I note
> that there are several ways of doing statistical inference for
> generalized linear mixed models:
>
> (1) Standard maximum likelihood estimation, computationally intensive
> due to the intractable likelihood function
>
> (2) Penalized quasi-likelihood (PQL) or similar approximations
> (Breslow & Clayton, 1993; Lee & Nelder, 1999; Schall, 1991). This
> method is not good for binomial outcomes in particular.
> Theoretically, it is consistent when the cluster size increases, but
> it is not consistent when the number of clusters increases while the
> size of each cluster remains constant. The standard MLE (1) is
> consistent under either increasing cluster size or an increasing
> number of clusters. For theoretical results, see Lin, X. & Breslow,
> N. E. (1996) in JASA. For an empirical demonstration, see Carlin et
> al. (2001) in Biostatistics.
>
> (3) Bayesian approach. See, for example, Clayton, D. G. (1996).
> Generalized linear mixed models. In Markov Chain Monte Carlo in
> Practice, Ed. W. R. Gilks, S. Richardson & D. J. Spiegelhalter, pp.
> 275-301. London: Chapman and Hall.
>
> (4) REML estimation in GLMMs. I will have a paper in the June issue
> of Biometrika. REML estimation has the same asymptotic properties as
> standard MLE but less bias in finite samples. Of course, one can
> argue that the Bayesian formulation automatically generates REML for
> the variance components.
>
> Now the software.
>
> PQL and variations: GLME (in beta) by José Pinheiro, reglm by Gordon
> Smyth, and glmmPQL in the MASS package by Venables & Ripley
>
> Bayesian method: GLMMGibbs, and BUGS (not an R or S-PLUS package).
>
> Standard MLE: glmm (only a random intercept) in one of Jim Lindsey's
> packages?
>
> OK, this still leaves us without a good way of doing standard MLE. I
> have programmed standard MLE using R. It takes 6 minutes for a
> clustered binary-outcome dataset with 50 clusters, 10 binary
> responses per cluster, 6 fixed effects, and two random effects per
> cluster.
Good summary. I have a couple of comments to add.
Stating that a method is penalized quasi-likelihood (PQL) does not
give a full description, because different algorithms can be derived
from penalized quasi-likelihood. Prof. Ripley's approach in glmmPQL
is elegant and effective. He uses the same iteratively reweighted
least squares approach used with GLMs, except that each iteration
solves a weighted linear mixed-effects estimation problem rather than
a weighted least squares problem.
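For concreteness, the interface looks like a GLM call with an added
random-effects formula; something along the lines of the example on
the glmmPQL help page (the bacteria data are shipped with MASS):

    library(MASS)   # glmmPQL(); it calls lme() from the nlme package
    library(nlme)
    ## logistic GLMM with a random intercept for each subject (ID)
    fit <- glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
                   family = binomial, data = bacteria)
    summary(fit)    # fixed effects, variance components, approximate SEs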
Different implementations of PQL for GLMMs will yield different
estimates. The same is true for maximum likelihood approaches that
use numerical quadrature. As you said, determining MLEs for GLMMs
requires optimizing a criterion whose evaluation involves an
intractable integral. Doing this reliably while allowing easy and
flexible specification of the model is difficult.
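To make the integral concrete, here is a toy sketch (not the code in
any of the packages mentioned above) of the per-cluster marginal
log-likelihood for a logistic model with a single random intercept,
approximated on a simple quadrature grid; serious implementations use
Gauss-Hermite nodes rather than an equally spaced grid:

    ## log of the integral over b of prod_j Pr(y_ij | b) * dnorm(b),
    ## where Pr(y_ij = 1 | b) = plogis(x_ij'beta + sigma * b)
    cluster.loglik <- function(y, X, beta, sigma,
                               nodes = seq(-5, 5, length = 101)) {
        h <- nodes[2] - nodes[1]            # grid spacing
        eta <- drop(X %*% beta)             # fixed-effects linear predictor
        vals <- sapply(nodes, function(b) {
            p <- plogis(eta + sigma * b)    # conditional probabilities given b
            prod(p^y * (1 - p)^(1 - y)) * dnorm(b)
        })
        log(sum(vals) * h)                  # midpoint-rule approximation
    }

The full log-likelihood is the sum of these terms over clusters, and
the MLEs come from handing that sum to a general-purpose optimizer
such as optim(), over beta and log(sigma).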
I have been looking at some comparisons recently using simulated data
described in Rodriguez and Goldman (JRSS-A, 1995) (see also Rodriguez
and Goldman, JRSS-A, 2001). Some of these data are available at
http://data.princeton.edu/multilevel/
Prof. Rodriguez has also used these data to compare methods that
have been implemented since the 1995 paper was written. He may be
willing to share these results.
I can provide you with the results of those fits if you are
interested. I have created an R package from the simulated data sets.
One of the tests run by R CMD check will fit each of the data sets
using glmmPQL. It takes about 45 minutes for the 100 simulations on a
1.2 GHz desktop computer running Linux.
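Roughly, the test amounts to a loop like the following (the data and
variable names here are placeholders, not the ones in my package):

    library(MASS)
    library(nlme)
    ## 'simdata' stands for a list of the 100 simulated data frames,
    ## each with a binary response y, covariates, and a grouping
    ## factor 'cluster' (placeholder names only)
    fits <- lapply(simdata, function(d)
        try(glmmPQL(y ~ x1 + x2 + x3, random = ~ 1 | cluster,
                    family = binomial, data = d, verbose = FALSE)))
    ## collect the fixed-effects estimates from the fits that succeeded
    ok  <- !sapply(fits, inherits, "try-error")
    est <- t(sapply(fits[ok], fixef))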
When evaluating the MLEs it is important to state how the numerical
quadrature is being done, because it influences the results. I like
adaptive methods that determine the conditional modes of the random
effects and center an evaluation grid around them. Such methods
require optimization within quadrature within optimization, so the
programming is complex.
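As a toy illustration of the adaptive idea (again for a scalar random
intercept, and again not the actual code in any of these packages):
the inner optimization locates the conditional mode of the random
effect for one cluster, and the quadrature grid is centered and
scaled at that mode before the integrand is summed.

    ## adaptive version of the per-cluster log-likelihood sketched above
    adaptive.cluster.loglik <- function(y, X, beta, sigma,
                                        z = seq(-4, 4, length = 31)) {
        eta <- drop(X %*% beta)
        ## log of the unnormalized integrand; b is the standardized
        ## random effect, so the actual random effect is sigma * b
        h <- function(b)
            sum(dbinom(y, 1, plogis(eta + sigma * b), log = TRUE)) +
                dnorm(b, log = TRUE)
        ## inner optimization: conditional mode of b for this cluster
        bhat <- optimize(h, c(-10, 10), maximum = TRUE)$maximum
        ## curvature at the mode (finite difference) sets the grid scale
        eps <- 1e-4
        hess <- (h(bhat + eps) - 2 * h(bhat) + h(bhat - eps)) / eps^2
        s <- sqrt(-1 / hess)
        nodes <- bhat + s * z           # grid centered at the conditional mode
        log(sum(exp(sapply(nodes, h))) * s * (z[2] - z[1]))
    }

The outer optimization over beta and sigma then calls this for every
cluster at every iteration, which is where the "optimization within
quadrature within optimization" nesting comes from.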