[R] coef se in lme
Irene Mantzouni
ima at difres.dk
Fri Nov 2 10:04:17 CET 2007
Thank you very much for the reply (and hopefully I am replying back in the proper way).
Do you think the delta method would be an acceptable way to estimate approximate confidence intervals for the resulting group specific coefficients (combining fixed effects and BLUPS)?
Regarding the MCMC-related approach, how is it possible to save the fixed and the
random effects from the MCMC chain? Can this be implemented through the nlme library, or is there some more clear-cut way (I wish I had a strong statistical background and abilities but... :)) to evaluate the empirical distribution of a parameter that is a linear combination of these quantities?
All the best,
Irene
________________________________
From: dmbates at gmail.com on behalf of Douglas Bates
Sent: Wed 17/10/2007 10:04 PM
To: Doran, Harold
Cc: Irene Mantzouni; r-help at stat.math.ethz.ch; R-SIG-Mixed-Models
Subject: Re: [R] coef se in lme
On 10/15/07, Doran, Harold <HDoran at air.org> wrote:
> ?vcov
The vcov method returns the estimated variance-covariance matrix of
the fixed-effects only. I think Irene's question is about the
combination of the fixed-effects parameters and the BLUPs of the
random effects that is returned by the coef method applied to an lmer
object. (You may recall that you were the person who requested such a
method in lme4 like the coef method in nlme :-)
On the face of it this quantity should be easy to define and evaluate
but in fact it is not easy to do so because these are combinations of
model parameters (the fixed effects) and unobserved random variables
(the random effects). It gets a bit tricky trying to decide what the
variance of this combination would be. I think there is a sensible
definition, or at least a computationally reasonable definition, but
there are still a few slippery points in the argument.
Lately I have taken to referring to the "estimates" of the random
effects, what are sometimes called the BLUPs or Best Linear Unbiased
Predictors, as the "conditional modes" of the random effects. That
is, they are the values that maximize the density of the random
effects given the observed data and the values of the model
parameters. For a linear mixed model the conditional distribution of
the random effects is multivariate normal so the conditional modes are
also the conditional means. Also, we can evaluate the conditional
variance-covariance matrix of the random effects up to a scale factor.
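The quantities described above can all be extracted from a fitted lmer model. A minimal sketch using the current lme4 interface (which postdates this email; in particular, the `condVar` argument and the `"postVar"` attribute name have varied across lme4 versions, so treat them as assumptions):

```r
# Sketch: fixed effects, conditional modes, and conditional variances
# from a fitted lmer model, using the sleepstudy data shipped with lme4.
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

fixef(fm)    # estimates of the fixed-effects parameters
coef(fm)     # per-group coefficients: fixed effects + conditional modes

# Conditional modes of the random effects, with their conditional
# variance-covariance matrices attached (one slice per group):
re <- ranef(fm, condVar = TRUE)
pv <- attr(re$Subject, "postVar")   # k x k x n.groups array
</imports>
```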
The next part is where things get a bit hazy for me but I think it
makes sense to consider the joint distribution of the estimator of the
fixed-effects parameters and the random effects conditional on the
data and, possibly, on the variance components. Conditional on the
relative variance-covariance of the random effects (i.e. the matrix
that occurs as the penalty term in the penalized least squares
representation of the model) the joint distribution of the
fixed-effects estimators and the random effects is multivariate normal
with mean and variance-covariance matrix determined from the
mixed-model equations.
This big (p+q by p+q, where p is the dimension of the fixed effects
and q is the dimension of the random effects) variance-covariance
matrix could be evaluated and, from that, the variance of any linear
combination of components. However, I have my doubts about whether it
is the most sensible answer to evaluate. Conditioning on the relative
variance-covariance matrix of the random effects is cheating, in a
way. It would be like saying we have a known variance, $\sigma^2$
when, in fact, we are using an estimate. The fact that we don't know
$\sigma^2$ is what gives rise to the t distributions and F
distributions in linear models and we are all trained to pay careful
attention to the number of degrees of freedom in that estimate and how
it affects our ideas of the precision of the estimates of other model
parameters. For mixed models, though, many practitioners are quite
comfortable conditioning on the value of some of the variance
components but not others. It could turn out that conditioning on the
relative variance-covariance of the random effects is not a big deal
but I don't know. I haven't examined it in detail and I don't know of
others who have.
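For illustration only, here is one crude approximation: add the variance of the fixed-effect estimators to the conditional variances of the random effects, term by term. This assumes zero covariance between the two pieces, which is exactly the questionable conditioning step discussed above, so the result should be treated as a rough lower-bound sketch rather than a proper standard error (the `"postVar"` attribute name is an lme4-version-dependent assumption):

```r
# Naive approximate SEs for the rows of coef(fm): diag(vcov) of the
# fixed effects plus the per-group conditional variances of the
# random effects, ignoring their covariance.
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

vc.fix <- diag(vcov(fm))                  # variance of fixed-effect estimators

re <- ranef(fm, condVar = TRUE)
pv <- attr(re$Subject, "postVar")         # k x k x n.groups array
vc.re <- t(apply(pv, 3, diag))            # conditional variances per group

# One row per group, one column per coefficient; the zero-covariance
# assumption is the weak point, as discussed in the text.
se.naive <- sqrt(sweep(vc.re, 2, vc.fix, "+"))
```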
Another approach entirely is to use Markov chain Monte Carlo to
examine the joint distribution of the parameters (in the Bayesian
sense) and the random effects. If you save the fixed effects and the
random effects from the MCMC chain then you can evaluate the linear
combination of interest throughout the chain and get an empirical
distribution of the quantities returned by coef.
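The `mcmcsamp` function that implemented this in 2007-era lme4 was later removed, so as a stand-in here is a parametric bootstrap sketch with `bootMer`: refit the model to simulated data and record the per-group coefficients from each refit, giving an empirical distribution of the quantities returned by coef. This is a different technique from MCMC (no prior, repeated refitting instead of a chain) but serves the same practical purpose:

```r
# Parametric bootstrap of the per-group coefficients returned by coef().
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Each bootstrap replicate: simulate new data from the fitted model,
# refit, and flatten coef() into a numeric vector.
bb <- bootMer(fm,
              FUN  = function(m) unlist(coef(m)$Subject),
              nsim = 200)

# Empirical 95% intervals for each group-specific coefficient:
apply(bb$t, 2, quantile, probs = c(0.025, 0.975))
```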
This is probably an unsatisfactory answer for Irene who may have
wanted something quick and simple. Unfortunately, I don't think there
is a quick, simple answer here.
I suggest we move this discussion to the R-SIG-Mixed-Models list which
I am cc:ing on this reply.
> -----Original Message-----
> From: r-help-bounces at r-project.org on behalf of Irene Mantzouni
> Sent: Mon 10/15/2007 3:20 PM
> To: r-help at stat.math.ethz.ch
> Subject: [R] coef se in lme
>
> Hi all!
>
> How is it possible to estimate standard errors for coef obtained from lme?
> Is there something like se.coef() for lmer, or what is the analytical solution?
>
> Thank you!
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>