# [R-sig-ME] [R] coef se in lme

David Duffy David.Duffy at qimr.edu.au
Fri Oct 19 04:06:37 CEST 2007

```
On Thu, 18 Oct 2007, Douglas Bates wrote:

> On 10/18/07, dave fournier <otter at otter-rsch.com> wrote:
>
>> In the AD Model Builder Random Effects package we provide estimated
>> standard deviations for any function of the fixed and random effects,
>> (here I include the parameters which determine the covariance matrices, if
>> present) and the random effects. This is for general nonlinear random
>> effects models, but the calculations can be used for linear models as
>> well. We calculate these estimates as follows. Let L(x,u)
>> be the log-likelihood function for the parameters x and u given the
>> observed data,
>> where u is the vector of random effects and x is the vector of the other
>> parameters.
>
> I know it may sound pedantic but I don't know what a log-likelihood
> L(x,u) would be because you are treating parameters and the random
> effects as if they are the same type of object and they're not.  If
>
>> Let F(x) be the log-likelihood for x after the u have been
>> integrated out. This integration might be exact or more commonly via the
>> Laplace approximation or something else.
>> For any x let uhat(x) be the value of u which maximizes L(x,u),
>
> I think that is what I would call the conditional modes of the random
> effects.  These depend on the observed responses and the model
> parameters.
>
>> and let xhat be the value of x which maximizes F(x).
>
>> The estimate for the covariance matrix for the x is then
>> S_xx = inv(F_xx) and the estimated full covariance matrix Sigma for the
>> x and u is given by
>
>> [ S_xx                S_xx * uhat_x                         ]
>> [ (S_xx * uhat_x)'    uhat_x' * S_xx * uhat_x + inv(L_uu)   ]
>
>> where ' denotes transpose, _x denotes the first derivative wrt x (note
>> that uhat is a function of x, so uhat_x makes sense), and _xx, _uu denote
>> the second derivatives wrt x and u. We then use Sigma and the delta
>> method to estimate the standard deviation of any (differentiable)
>> function of x and u.
>
[Snip]
>
> Can you give a bit more detail on how you justify mixing derivatives
> of the marginal log-likelihood (F) with derivatives of the conditional
> density (L).  Do you know that these are on the same scale?  I'm
> willing to believe that they are - it is just that I can't see right
> off why they should be.
>

I find all of this a bit above my head, but I do have a paper by Matt
Wand, _Fisher information for generalised linear mixed models_,
Journal of Multivariate Analysis (2007), 98, 1412-1416.

This looks at the canonical-link GLMM with Gaussian random effects, for a
simple one-level/random-intercepts model, and ends rather abruptly ;) with

"Remark 2. Approximate standard errors for the maximum likelihood
estimates beta-hat and sigma2-hat can be obtained from the diagonal
entries of I(beta-hat, sigma2-hat)^-1. However, as pointed out in
Remark 1, implementation is often hindered by intractable multivariate
integrals. Additionally, dependence among the entries of y induced by u
means that central limit theorems of the type:
I(beta-hat, sigma2-hat)^{1/2} {(beta-hat, sigma2-hat) - (beta, sigma2)}
converges in distribution to a N(0, I) random vector, have not been
established in general and, hence, interpretation of standard errors
is cloudy. Nevertheless, there are many special cases, such as
m-dependence when the data are from a longitudinal study, for which
central limit theorems can be established."

You'll have to read the paper, which gives the derivation and formulae.

David Duffy.
--
| David Duffy (MBBS PhD)                                         ,-_|\
| email: davidD at qimr.edu.au  ph: INT+61+7+3362-0217 fax: -0101  /     *
| Epidemiology Unit, Queensland Institute of Medical Research   \_,-._/
| 300 Herston Rd, Brisbane, Queensland 4029, Australia  GPG 4D0B994A v

```
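The recipe Fournier describes can be checked numerically on a toy model. The sketch below (hypothetical, not ADMB itself; the model, data, and all names are assumptions for illustration) uses a one-way Gaussian model y[i][j] = x + u[i] + e[ij] with known variances, so uhat(x) has a closed form and the Laplace approximation to F(x) is exact. It assembles the blocks of Sigma from finite-difference derivatives and applies the delta method to g(x, u) = x + u[0]. Signs are taken so that inv(-F_xx) and inv(-L_uu) are the (positive) observed-information inverses, which is presumably what the post's inv(F_xx) and inv(L_uu) intend at a maximum.

```python
# Toy check of the Sigma / delta-method recipe from the post (hypothetical
# model, not ADMB): y[i][j] = x + u[i] + e[ij], e ~ N(0, 1),
# u[i] ~ N(0, tau2) with tau2 treated as known, so x is the only fixed parameter.
import math

tau2 = 2.0
y = [[1.2, 0.8, 1.5], [2.1, 2.4], [0.3, 0.5, 0.1, 0.4]]
m = len(y)
h = 1e-5  # finite-difference step

def loglik(x, u):
    """Joint log-likelihood L(x, u), additive constants dropped."""
    ll = 0.0
    for i, yi in enumerate(y):
        for yij in yi:
            ll -= 0.5 * (yij - x - u[i]) ** 2
        ll -= 0.5 * u[i] ** 2 / tau2
    return ll

def uhat(x):
    """Conditional modes of u given x: closed form since L is quadratic in u."""
    return [sum(yij - x for yij in yi) / (len(yi) + 1.0 / tau2) for yi in y]

def F(x):
    """Marginal log-likelihood via Laplace; exact here, and the log|L_uu|
    term is constant in x for this model, so it is dropped."""
    return loglik(x, uhat(x))

# Maximize F(x) by Newton steps on finite-difference derivatives.
x = 0.0
for _ in range(25):
    g = (F(x + h) - F(x - h)) / (2 * h)
    H = (F(x + h) - 2 * F(x) + F(x - h)) / h ** 2
    x -= g / H
xhat = x

# Blocks of Sigma (x is scalar here, so every block stays one-dimensional).
F_xx = (F(xhat + h) - 2 * F(xhat) + F(xhat - h)) / h ** 2
S_xx = -1.0 / F_xx  # inv(-F_xx): positive at the maximum
uhat_x = [(p - q) / (2 * h) for p, q in zip(uhat(xhat + h), uhat(xhat - h))]
L_uu_inv = [1.0 / (len(yi) + 1.0 / tau2) for yi in y]  # inv(-L_uu), diagonal

var_x = S_xx
cov_xu = [S_xx * d for d in uhat_x]
var_u = [uhat_x[i] ** 2 * S_xx + L_uu_inv[i] for i in range(m)]

# Delta method for g(x, u) = x + u[0] (the predicted group-1 mean):
# grad g picks out (1, 1, 0, ...), so var(g) = var_x + 2*cov + var_u[0].
var_g = var_x + 2 * cov_xu[0] + var_u[0]
se_g = math.sqrt(var_g)
print(f"xhat = {xhat:.4f}, se(x) = {math.sqrt(var_x):.4f}, se(g) = {se_g:.4f}")
```

In the scalar case the blocks collapse as above; in general S_xx is a matrix inverse and uhat_x a Jacobian, but the block assembly is the same. Note that var_u[i] exceeds the conditional variance inv(-L_uu)[i] by exactly the term propagating the uncertainty in xhat, which is the point of the construction.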