# [R-sig-ME] GLMM-Implementation question

Douglas Bates bates at stat.wisc.edu
Thu Jun 11 18:06:28 CEST 2009

On Thu, Jun 11, 2009 at 10:38 AM, Fabian Scheipl
<Fabian.Scheipl at stat.uni-muenchen.de> wrote:
> Never mind me, the answer to the second question is:
> update_u (called by update_dev) iteratively updates the orthonormalized
> random effects until convergence each time before S_nlminb_iterate is
> called.

Yes.  I was about to write that.
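
The nested structure described above (an inner loop that iterates the penalized updates for the spherical random effects to convergence before every step of the outer nonlinear optimizer) can be sketched in a few lines. Below is a pure-Python toy version for a Poisson random-intercept model; the data, the function names, and the scalar-per-group simplification are illustrative assumptions, not lme4's actual C implementation.

```python
import math

# Toy Poisson random-intercept model: eta_ij = beta0 + sigma * u_i,
# y_ij ~ Poisson(exp(eta_ij)), with u_i ~ N(0, 1) on the "spherical" scale.
# Hypothetical example data/functions; a sketch of the nested scheme only.

def pirls_u(y_groups, beta0, sigma, tol=1e-10, max_iter=50):
    """Inner loop: penalized Newton/Fisher-scoring updates for each u_i,
    iterated to convergence (as update_u does before each outer step).
    With the canonical log link, observed and expected information agree,
    so Fisher scoring and Newton coincide here."""
    u = [0.0 for _ in y_groups]
    for _ in range(max_iter):
        max_step = 0.0
        for i, ys in enumerate(y_groups):
            mu = math.exp(beta0 + sigma * u[i])
            # gradient and information of the penalized log-likelihood in u_i
            grad = sigma * sum(yj - mu for yj in ys) - u[i]
            info = sigma**2 * len(ys) * mu + 1.0
            step = grad / info
            u[i] += step
            max_step = max(max_step, abs(step))
        if max_step < tol:
            break
    return u

def laplace_deviance(y_groups, beta0, sigma):
    """Outer objective: Laplace-approximate deviance at the conditional modes."""
    u = pirls_u(y_groups, beta0, sigma)
    dev = 0.0
    for i, ys in enumerate(y_groups):
        eta = beta0 + sigma * u[i]
        mu = math.exp(eta)
        loglik = sum(yj * eta - mu - math.lgamma(yj + 1) for yj in ys)
        info = sigma**2 * len(ys) * mu + 1.0
        # -2 * (penalized log-likelihood at the mode) + log-determinant term
        dev += -2.0 * (loglik - 0.5 * u[i] ** 2) + math.log(info)
    return dev
```

An outer optimizer (nlminb in lme4's case) would then minimize `laplace_deviance` over sigma and beta0, re-running the inner loop at every trial value of the parameters.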

> On Thu, Jun 11, 2009 at 5:17 PM, Fabian Scheipl
> <Fabian.Scheipl at stat.uni-muenchen.de> wrote:

>> Dear List,
>>
>> While preparing the slides for a lecture on GLMMs that I'm giving next week,
>> I noticed that I don't quite understand eq. 40 (the Laplace approximation of
>> a GLMM likelihood) in the Implementation vignette for lme4.
>> I would be extremely grateful (my students as well, of course, but probably
>> significantly less so ;) ) if somebody would find the time to offer his/her
>> thoughts on some of the following points:
>>
>> I do not understand how the expression that is exponentiated in the second
>> line is equivalent to the quadratic Taylor approximation of the penalized
>> log-likelihood around the conditional mode \tilde b.
>>
>> AFAIU the first two terms in the sum are just the penalized log-likelihood
>> evaluated at the conditional modes \tilde b. The first term is the
>> likelihood of y conditional on fixed and random effects, the second is
>> equivalent to \tilde b ' G^{-1} \tilde b.
>> [should be \tilde b^\star ' \tilde b^\star instead of \tilde b ' \tilde
>> b^\star, I think? Also, there are two plus signs after that].

I would have to go back and look at that document more carefully to be
able to answer these questions.  Unfortunately I am in "crunch mode"
on another project right now and I don't think I will be able to free
up the time today.
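
For readers following along without the vignette open, the generic Laplace step being discussed can be written out in the thread's own notation (a sketch, not a quotation of eq. 40; the signs and the b vs. b^\star distinction are exactly what is being queried):

```latex
% Penalized log-likelihood in the spherical random effects b^\star:
g(b^\star) = \log f(y \mid \beta, b^\star) - \tfrac{1}{2}\, {b^\star}' b^\star

% Quadratic (Taylor) expansion around the conditional mode \tilde{b}^\star,
% where the gradient of g vanishes:
g(b^\star) \approx g(\tilde{b}^\star)
  - \tfrac{1}{2}\, (b^\star - \tilde{b}^\star)' D^{-1} (b^\star - \tilde{b}^\star)

% Integrating the resulting Gaussian kernel (the (2\pi)^{q/2} factors from
% the N(0, I_q) prior density and the Gaussian integral cancel):
L(\beta, \theta) \approx \exp\{ g(\tilde{b}^\star) \}\, |D|^{1/2}
```

One fact that bears on the observed-vs-expected question below: for canonical links (log for Poisson, logit for binomial) the second derivative of the GLM log-likelihood does not depend on y, so observed and expected information coincide and the distinction only arises for non-canonical links.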

>> The next term should probably read (b - \tilde b)' D^{-1} (b - \tilde b),
>> the quadratic term in the Taylor approximation.
>> What bothers me is that, as D is defined in eq. 39 [which should define
>> Var(b^\star | ...), not Var(b | ...)], it is the inverse of the expected
>> Fisher information for b^\star, not the observed one, i.e. we are using an
>> expression for the expectation of the second derivative rather than the
>> second derivative itself. Doesn't that make a difference, and is what we
>> are doing still a Laplace approximation in the conventional sense?
>>
>> My second question: I got lost in the source for lmer - in which function
>> called by ST_setPars do the PIRLS updates for b happen, and do we actually
>> do Fisher scoring until convergence for b every time we update \beta and
>> \theta, or is it just a single Fisher-scoring step on b before each call to
>> S_nlminb_iterate?
>>
>> Best Wishes,
>> Fabian Scheipl
>>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>