Dear List,
While preparing the slides for a lecture on GLMMs I'm giving next week, I
noticed that I don't quite understand eq. 40 (the Laplace approximation of a
GLMM likelihood) in the Implementation vignette for lme4.
I would be extremely grateful (my students as well, of course, but probably
significantly less so ;) ) if somebody could find the time to offer his/her
thoughts on some of the following points:
I do not understand how the expression that is exponentiated in the second
line is equivalent to the quadratic Taylor-approximation of the penalized
log-likelihood around the conditional mode \tilde b.
AFAIU the first two terms in the sum are just the penalized log-likelihood
evaluated at the conditional mode \tilde b: the first term is the
likelihood of y conditional on the fixed and random effects, the second is
equivalent to \tilde b ' G^{-1} \tilde b.
[In the vignette this should read \tilde b^\star ' \tilde b^\star instead
of \tilde b ' \tilde b^\star, I think? Also, there are two plus signs after
that.]
The next term should probably read (b - \tilde b) ' D^{-1} (b - \tilde b),
the quadratic term in the Taylor approximation.
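For concreteness, here is the expansion I am comparing against - the
second-order Taylor expansion of the penalized log-likelihood

  l_p(b) = log f(y | b, \beta) - (1/2) b^\star ' b^\star

around its maximizer \tilde b:

  l_p(b) \approx l_p(\tilde b) - (1/2) (b - \tilde b) ' D^{-1} (b - \tilde b),

where the first-order term vanishes because \tilde b is the conditional
mode. Exponentiating and integrating the remaining Gaussian kernel over b
then yields the Laplace approximation of the marginal likelihood.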
What bothers me is that, as D is defined in eq. 39 [which should define
Var(b^\star|...), not Var(b|...), by the way],
it is the inverse of the expected Fisher information for b^\star, not the
observed one - i.e. we are not using an expression for the second
derivative itself but for its expectation. Doesn't that make a difference,
and is what we are doing then still a Laplace approximation in the
conventional sense?
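To make the distinction concrete (my own toy sketch, nothing to do with the
vignette's notation or the lme4 sources): for a canonical link the observed
and expected information coincide, because the Hessian of the GLM
log-likelihood does not involve y at all - so the question only really
bites for non-canonical links. A quick numerical check for a Poisson model
with log link:

```python
import numpy as np

# Toy Poisson GLM with log link (canonical link). We compare the observed
# information, obtained by numerically differentiating the score, with the
# analytic expected Fisher information X' diag(mu) X.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
beta = np.array([0.2, -0.4])
mu = np.exp(X @ beta)
y = rng.poisson(mu).astype(float)

def score(b):
    # gradient of the Poisson log-likelihood (log link): X'(y - mu(b))
    return X.T @ (y - np.exp(X @ b))

# observed information: -l''(beta), via central finite differences of the score
h = 1e-6
obs = np.empty((2, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    obs[:, j] = -(score(beta + e) - score(beta - e)) / (2 * h)

# expected information: E[-l''(beta)] = X' diag(mu) X (no y anywhere)
expected = X.T @ (mu[:, None] * X)
assert np.allclose(obs, expected, atol=1e-4)
```

For a non-canonical link the Hessian picks up a term involving (y - mu),
and the two matrices differ - which is exactly the case I am worried about.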
My second question: I got lost in the source for lmer - in which function
called by ST_setPars do the PIRLS updates for b happen? And do we actually
run Fisher scoring until convergence for b every time we update \beta and
\theta, or is it just a single Fisher-scoring step on b before each call to
S_nlminb_iterate?
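For context, this is the kind of update I mean - a penalized Fisher-scoring
(PIRLS-style) iteration for the conditional modes in a toy random-intercept
Poisson model. This is my own illustrative sketch in Python, not the actual
lmer code:

```python
import numpy as np

# Toy Poisson GLMM with random intercepts and known variance sigma2.
# We iterate penalized Fisher scoring for the conditional modes b
# (illustrative sketch only, not the lme4 implementation).
rng = np.random.default_rng(2)
n_groups, n_per = 5, 20
Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))  # random-intercept design
sigma2 = 0.5
b_true = rng.normal(scale=np.sqrt(sigma2), size=n_groups)
y = rng.poisson(np.exp(Z @ b_true)).astype(float)

b = np.zeros(n_groups)
for _ in range(50):                                  # scoring to convergence
    mu = np.exp(Z @ b)
    grad = Z.T @ (y - mu) - b / sigma2               # penalized score
    info = Z.T @ (mu[:, None] * Z) + np.eye(n_groups) / sigma2
    step = np.linalg.solve(info, grad)
    b += step
    if np.max(np.abs(step)) < 1e-10:
        break

# at the conditional mode the penalized score vanishes
mu = np.exp(Z @ b)
assert np.max(np.abs(Z.T @ (y - mu) - b / sigma2)) < 1e-6
```

In this sketch the loop runs until the step size is negligible; what I am
asking is whether lmer does the analogous full inner loop for b, or just
one such step, between the outer optimizer's iterations.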
Best Wishes,
Fabian Scheipl