# [R-sig-ME] fitting method in glmer and var for the cond. modes

Douglas Bates bates at stat.wisc.edu
Fri Jan 11 18:56:04 CET 2008

On Jan 11, 2008 4:39 AM, vito muggeo <vmuggeo at dssm.unipa.it> wrote:
> dear all,
> I've just installed the latest version of lme4.
> Thanks to prof Bates for his excellent work.

You're welcome.

> I have an issue and a question.

> 1)issue:
> I tried to fit a binomial GLMM; everything works, but it appears that
> method="PQL" is not implemented. Namely, the following two calls yield
> exactly the same results:
> glmer(formula, family = binomial, data = d, method = "Laplace")
> glmer(formula, family = binomial, data = d, method = "PQL")

> The summary method on the relevant fits prints in each case (correctly?)
> "Generalized linear mixed model fit by the Laplace approximation"

Yes.  After the seemingly endless task of development of the software
comes the even more seemingly endless task of documenting it.

The current scheme omits PQL iterations entirely.  For all model
types (LMMs, GLMMs, NLMMs and GNLMMs), the ML estimates are those
obtained by direct optimization of the Laplace approximation to the
log-likelihood.  For LMMs the Laplace approximation happens to be the
exact log-likelihood.  Also, for those models the default criterion is
REML rather than ML, but one can choose ML if desired.
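For LMMs the choice of criterion can be sketched as follows (a hedged
example, assuming the sleepstudy data set that ships with lme4):

```r
## Sketch: fit the same LMM by REML (the default) and by ML.
## Assumes lme4 and its bundled sleepstudy data are available.
library(lme4)

fm_reml <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
fm_ml   <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
                REML = FALSE)  # switch the criterion from REML to ML
```

The fixed-effects estimates typically differ only slightly between the
two fits; the variance-component estimates are where REML and ML
disagree most.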

I do plan to allow for optimization of the Adaptive Gauss-Hermite
Quadrature (AGQ) evaluation of the log-likelihood, when feasible.  The
Laplace approximation is a special case of AGQ, corresponding to a
single quadrature point per dimension.  Generally when we speak of AGQ
we are referring to cases of more than one quadrature point per
dimension.

The lmer/glmer/nlmer functions in the development version have an
argument "verbose".  If you want to get a better idea of how the
optimization is proceeding, set verbose = 1 (output on every
iteration) or verbose = 2 (output every second iteration), etc.  The
extraordinarily curious can set verbose = -1 and get even more output
from the penalized iteratively reweighted least squares (PIRLS)
algorithm to determine the conditional modes of the random effects at
each evaluation of the log-likelihood.
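As a hedged illustration (using the cbpp data set that ships with
lme4), the verbose argument is passed directly in the model call:

```r
## Sketch: a binomial GLMM with iteration-level optimizer output.
## verbose = 1 prints the parameter values and the Laplace-approximated
## deviance at each evaluation during the optimization.
library(lme4)

gm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            data = cbpp, family = binomial, verbose = 1)
```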

> The estimates are very similar to those coming from MASS::glmmPQL(). For
> instance, the estimated std. dev. for the intercept is 0.1613318 (via
> glmmPQL()) vs. 0.16119 (via glmer() + Laplace). Since the response is
> binary, I expected the estimates to be somewhat different.

> 2)question:

> I am interested in obtaining the variances of the predictions (i.e. the
> variances of conditional modes \tilde{b}_i) from a simple LMM (fitted
> via lme or lmer)

Those can be obtained from

ranef(fm, postVar = TRUE)

Again, I may change the terminology from postVar (posterior variances)
to condVar (conditional variances) at some point because it is a
misnomer to call these posterior variances.  With non-nested grouping
factors these are incomplete in that they only give the parts of the
conditional variance-covariance matrix of the random effects that are
on or close to the diagonal.
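Concretely, the variances come back as an attribute on each component
of the ranef value (a sketch, again assuming the sleepstudy example):

```r
## Sketch: extract the conditional variances of the conditional modes.
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
re <- ranef(fm, postVar = TRUE)

## Each component carries a "postVar" attribute: an array of the
## per-level variance-covariance matrices of the conditional modes,
## of dimension q x q x (number of levels of the grouping factor).
pv <- attr(re$Subject, "postVar")
str(pv)
```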

> If I remember correctly, there was a "bVar" slot in the early
> versions of lmer. Am I right?

The bVar slot has gone away.  The computational methods in the
development version are much simpler than in previous versions.  The
complexity is all hidden in the sparse Cholesky decomposition in the L
slot.

> Also, how can I extract the same quantities from an "lme" fit?
>
>
>
> many thanks,
> vito
>
>
> --
> ====================================
> Vito M.R. Muggeo
> Dip.to Sc Statist e Matem Vianelli'
> Università di Palermo
> viale delle Scienze, edificio 13
> 90128 Palermo - ITALY
> tel: 091 6626240
> fax: 091 485726/485612
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>