[R-sig-ME] New computational approach to estimation of mixed models

Douglas Bates bates at stat.wisc.edu
Tue Nov 11 21:19:09 CET 2008


On Tue, Nov 11, 2008 at 12:31 PM, Rubén Roa-Ureta <rroa at udec.cl> wrote:
> comRades:

> I haven't read the paper, but by reading the abstract it looks like there is
> an analytical way to estimate an approximation to the covariance structure
> of a glmm by first estimating the random effects as fixed effects in a
> conventional glm. Maybe something worth looking at to consider for lme4?

> TI: Computationally feasible estimation of the covariance structure in
> generalized linear mixed models
> AU: Alam, MD. Moudud; Carling, Kenneth
> JN: Journal of Statistical Computation and Simulation
> PD: 2008
> VO: 78
> NO: 12
> PG: 1227-1237(11)
> PB: Taylor & Francis
> IS: 0094-9655
> URL:
> http://www.ingentaconnect.com/content/tandf/gscs/2008/00000078/00000012/art00008

I too haven't read the paper, and it wouldn't be wise for me to make
too many comments without doing so.  The description in the abstract
reminds me of what were called "two-stage" methods for fitting
nonlinear mixed-effects models in pharmacokinetics a couple of decades
ago: you fit the model to each cluster separately, then estimate the
variance of the random effects from the within-cluster estimates.
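
As a rough illustration of the two-stage idea (the simulated data and
all names below are made up, not taken from the paper):

set.seed(1)
ngrp <- 30; nper <- 50
d <- data.frame(grp = factor(rep(seq_len(ngrp), each = nper)),
                x = rnorm(ngrp * nper))
b <- rnorm(ngrp, sd = 0.8)                   # "true" random intercepts
d$y <- rbinom(nrow(d), 1, plogis(-0.5 + d$x + b[d$grp]))

## stage 1: fit a separate GLM within each cluster
stage1 <- lapply(split(d, d$grp),
                 function(di) coef(glm(y ~ x, family = binomial, data = di)))
ints <- sapply(stage1, `[`, "(Intercept)")

## stage 2: take the spread of the within-cluster intercepts as an
## estimate of the random-effects variance (ignoring the sampling
## variability of each within-cluster estimate, which is part of why
## this can work poorly)
var(ints)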

There are several reasons why this doesn't always work well:
unbalanced clusters, small clusters that do not support individual
parameter estimates, models with crossed or partially crossed random
effects, models with random effects corresponding to a subset of the
fixed-effects parameters, etc.

I think the general approach of trying to reduce the parameter
estimation to subproblems and somehow sew the results from the
subproblems back together loses sight of an important point: we want
the parameter estimates to optimize a criterion, such as the
log-likelihood.  Sometimes people get hung up on the particular
algorithm and want to compare, say, PQL estimates to Laplace
approximation estimates to adaptive Gauss-Hermite quadrature
estimates, etc.  In the field of nonlinear mixed-effects models there
are even more acronyms and estimation methods.  To me this misses the
point.  If you want maximum likelihood estimates, you should agree on
how to evaluate the likelihood and then compare estimates by comparing
the likelihoods.  How you get to the estimates is not as important as
whether the likelihood at the estimates is sufficiently close to the
optimal value.
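
For instance, with a recent lme4 one can fit the same model under two
likelihood approximations and compare the criterion itself rather than
the route taken to get there (the cbpp data shipped with the package
are used purely as an illustration):

library(lme4)
f1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            data = cbpp, family = binomial, nAGQ = 1)  # Laplace
f2 <- update(f1, nAGQ = 25)            # adaptive Gauss-Hermite
## compare the fits by the (approximate) log-likelihood at the estimates
logLik(f1)
logLik(f2)
## strictly, one would evaluate both sets of estimates under the same
## high-accuracy likelihood approximation before comparing them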

The abstract for this paper claims that standard methods for
estimating the parameters in generalized linear mixed models are too
slow, then proposes a method for a relatively simple estimation
situation (large cluster sizes and few random effects per cluster).  I
haven't really encountered slow estimation on such cases, and I do fit
such models frequently.  We routinely analyze 50,000 binary responses
with models that have thousands of clusters defining the random
effects and 10 to 15 fixed-effects parameters.  It might take 10 to 15
minutes to fit such a model, but that doesn't alarm me.  The data
cleaning takes much, much longer than that.
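
Something along these lines (simulated data, all names made up) is
representative of the scale I have in mind, and the timing can be
checked directly with system.time():

library(lme4)
set.seed(2)
n <- 50000; ngrp <- 2000
g <- sample(ngrp, n, replace = TRUE)
dat <- data.frame(grp = factor(g),
                  x1 = rnorm(n), x2 = rnorm(n),
                  f = factor(sample(letters[1:5], n, replace = TRUE)))
u <- rnorm(ngrp, sd = 1)               # "true" random intercepts
dat$y <- rbinom(n, 1, plogis(-1 + 0.5 * dat$x1 - 0.3 * dat$x2 + u[g]))
## binary responses, several fixed-effects terms, and one grouping
## factor with thousands of levels
system.time(
  fit <- glmer(y ~ x1 + x2 + f + (1 | grp), data = dat,
               family = binomial)
)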

I am working on a modification of the current lme4 that will, I hope,
speed up parameter estimation for such models.  It involves folding
the fixed-effects parameter optimization into the penalized,
iteratively reweighted least squares calculation.
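
Roughly, the idea is this (a toy, dense-matrix sketch for a logistic
random-intercept model, not the lme4 code): for a fixed value of the
relative standard deviation theta, a single penalized, iteratively
reweighted least squares loop produces both the fixed effects beta and
the spherical random effects u, so the outer nonlinear optimization is
over theta alone.

set.seed(3)
ngrp <- 20; nper <- 40; n <- ngrp * nper
g <- rep(seq_len(ngrp), each = nper)
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + x + rnorm(ngrp, sd = 0.7)[g]))
X <- cbind(1, x)                      # fixed-effects model matrix (n x p)
Z <- model.matrix(~ 0 + factor(g))    # random-effects indicators (n x q)

## Laplace-approximate profiled deviance as a function of theta alone;
## beta and u are determined inside by penalized IRLS
profiled_dev <- function(theta) {
  p <- ncol(X); q <- ncol(Z)
  beta <- rep(0, p); u <- rep(0, q)
  for (it in 1:50) {
    eta <- drop(X %*% beta + theta * (Z %*% u))
    mu  <- plogis(eta)
    w   <- mu * (1 - mu)
    z   <- eta + (y - mu) / w          # working response
    ## one weighted least squares solve for (beta, u) jointly; the
    ## appended rows [0, I] with zero response add the penalty ||u||^2
    A  <- rbind(cbind(sqrt(w) * X, sqrt(w) * theta * Z),
                cbind(matrix(0, q, p), diag(q)))
    cf <- qr.coef(qr(A), c(sqrt(w) * z, rep(0, q)))
    delta <- max(abs(cf - c(beta, u)))
    beta <- cf[1:p]; u <- cf[-(1:p)]
    if (delta < 1e-8) break
  }
  eta <- drop(X %*% beta + theta * (Z %*% u))
  mu  <- plogis(eta)
  dev <- -2 * sum(dbinom(y, 1, mu, log = TRUE))
  ## Laplace correction: log-determinant of the penalized information for u
  Lmat <- crossprod(sqrt(mu * (1 - mu)) * (theta * Z)) + diag(q)
  dev + sum(u^2) + as.numeric(determinant(Lmat, logarithm = TRUE)$modulus)
}

## the outer optimization is now one-dimensional
optimize(profiled_dev, interval = c(0.01, 3))$minimum

In lme4 itself the corresponding computation exploits the sparsity of
Z through a sparse Cholesky factorization rather than the dense QR
decomposition used in this sketch.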

There is a whole group of techniques for obtaining estimates



