[R-sig-ME] [R] coef se in lme
Doran, Harold
HDoran at air.org
Thu Oct 18 15:19:49 CEST 2007
I'm particularly interested in this, so I'll bite. Let me start with
something very basic and build up later as I think more about this.
Assume we have a model such as
Y_{ij} = \mu + a_i + e_{ij}, \quad a_i \sim N(0, s^2),\ e_{ij} \sim N(0, r^2)
In this basic model, we also assume the two random terms are
orthogonal, denoted e_{ij} \perp a_i.
Now, from ranef() I get the estimates of a_i for all i (what Doug calls
the conditional modes), and with the postVar argument to that extractor
I also get the variances of these conditional modes. vcov(), in turn,
returns the variance of the fixed parameter \mu.
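For concreteness, here is a minimal sketch of that extraction, using the
sleepstudy data that ships with lme4 (in current lme4 the argument is
spelled condVar, though the attribute it fills is still named "postVar"):

library(lme4)
## one-way model like the one above: a random intercept per Subject
fm <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy)

vcov(fm)                          # variance of the fixed effect \mu
re <- ranef(fm, condVar = TRUE)   # 'postVar = TRUE' in 2007-era lme4
pv <- attr(re$Subject, "postVar") # 1 x 1 x (number of subjects) array
pv[1, 1, ]                        # conditional variance of each a_i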
Given that we can find the variances for each term in the model, and
that they are assumed orthogonal, why is it that we can't simply take
the variance of a linear combination?
For example, if I want to find:
\hat{y}_{ij} = \mu + a_i
Then the variance of this would be simply
\mathrm{Var}(\hat{y}_{ij}) = \mathrm{Var}(\mu) + \mathrm{Var}(a_i)
And in this case there is no covariance given e_{ij} \perp a_i, right?
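Continuing the sketch above, the proposed calculation would look like
this (everything here leans on that no-covariance assumption):

## naive SE for \hat{y}_{ij} = \mu + a_i, ignoring any covariance
## between the fixed-effect estimate and the conditional modes
var_mu   <- diag(vcov(fm))          # var(\mu); the matrix is 1 x 1 here
var_a    <- pv[1, 1, ]              # conditional variance of each a_i
naive_se <- sqrt(var_mu + var_a)
cbind(coef(fm)$Subject, naive_se)   # per-group intercepts with naive SEs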
Even if this scenario is too simplistic, I assume I can use a Taylor
series expansion to derive something that gives me an estimate of the
variance of \hat{y}_{ij} in closed form that is pretty close, no?
OK, I'm ready for my lashing.
Harold
> -----Original Message-----
> From: dmbates at gmail.com [mailto:dmbates at gmail.com] On Behalf
> Of Douglas Bates
> Sent: Wednesday, October 17, 2007 4:05 PM
> To: Doran, Harold
> Cc: Irene Mantzouni; r-help at stat.math.ethz.ch; R-SIG-Mixed-Models
> Subject: Re: [R] coef se in lme
>
> On 10/15/07, Doran, Harold <HDoran at air.org> wrote:
> > ?vcov
>
> The vcov method returns the estimated variance-covariance
> matrix of the fixed-effects only. I think Irene's question
> is about the combination of the fixed-effects parameters and
> the BLUPs of the random effects that is returned by the coef
> method applied to an lmer object. (You may recall that you
> were the person who requested such a method in lme4 like the
> coef method in nlme :-)
>
> On the face of it this quantity should be easy to define and
> evaluate but in fact it is not easy to do so because these
> are combinations of model parameters (the fixed effects) and
> unobserved random variables (the random effects). It gets a
> bit tricky trying to decide what the variance of this
> combination would be. I think there is a sensible
> definition, or at least a computationally reasonable
> definition, but there are still a few slippery points in the argument.
>
> Lately I have taken to referring to the "estimates" of the
> random effects, what are sometimes called the BLUPs or Best
> Linear Unbiased Predictors, as the "conditional modes" of the
> random effects. That is, they are the values that maximize
> the density of the random effects given the observed data and
> the values of the model parameters. For a linear mixed model
> the conditional distribution of the random effects is
> multivariate normal so the conditional modes are also the
> conditional means. Also, we can evaluate the conditional
> variance-covariance matrix of the random effects up to a scale factor.
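In Harold's one-way model this can be made explicit: with n_i
observations in group i and the variance components treated as known,
standard results give

a_i \mid y \;\sim\; N\!\left( \frac{n_i s^2}{n_i s^2 + r^2}\,(\bar{y}_{i\cdot} - \mu),\; \frac{s^2 r^2}{n_i s^2 + r^2} \right),

so the conditional mode coincides with the conditional mean (the
familiar shrinkage estimator), and the conditional variance has a
closed form once the variance components are fixed at their estimates.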
>
> The next part is where things get a bit hazy for me but I
> think it makes sense to consider the joint distribution of
> the estimator of the fixed-effects parameters and the random
> effects conditional on the data and, possibly, on the
> variance components. Conditional on the relative
> variance-covariance of the random effects (i.e. the matrix
> that occurs as the penalty term in the penalized least
> squares representation of the model) the joint distribution
> of the fixed-effects estimators and the random effects is
> multivariate normal with mean and variance-covariance matrix
> determined from the mixed-model equations.
>
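For reference, one standard way to write this down (Henderson's
mixed-model equations): with y = X\beta + Zb + e, b \sim N(0, \sigma^2 \Sigma),
e \sim N(0, \sigma^2 I), the penalized least squares system is

\begin{bmatrix} X^\top X & X^\top Z \\ Z^\top X & Z^\top Z + \Sigma^{-1} \end{bmatrix} \begin{bmatrix} \hat{\beta} \\ \tilde{b} \end{bmatrix} = \begin{bmatrix} X^\top y \\ Z^\top y \end{bmatrix},

and, conditional on \Sigma and \sigma^2, the joint variance-covariance
matrix of (\hat{\beta}, \tilde{b} - b) is \sigma^2 times the inverse of
the coefficient matrix on the left.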
> This big (p+q by p+q, where p is the dimension of the fixed
> effects and q is the dimension of the random effects)
> variance-covariance matrix could be evaluated and, from that,
> the variance of any linear combination of components.
> However, I have my doubts about whether it is the most
> sensible answer to evaluate. Conditioning on the relative
> variance-covariance matrix of the random effects is cheating,
> in a way. It would be like saying we have a known variance,
> $\sigma^2$, when in fact we are using an estimate. The fact
> that we don't know $\sigma^2$ is what gives rise to the t
> distributions and F distributions in linear models and we are
> all trained to pay careful attention to the number of degrees
> of freedom in that estimate and how it affects our ideas of
> the precision of the estimates of other model parameters.
> For mixed models, though, many practitioners are quite
> comfortable conditioning on the value of some of the variance
> components but not others. It could turn out that
> conditioning on the relative variance-covariance of the
> random effects is not a big deal but I don't know. I haven't
> examined it in detail and I don't know of others who have.
>
> Another approach entirely is to use Markov chain Monte Carlo
> to examine the joint distribution of the parameters (in the Bayesian
> sense) and the random effects. If you save the fixed effects
> and the random effects from the MCMC chain then you can
> evaluate the linear combination of interest throughout the
> chain and get an empirical distribution of the quantities
> returned by coef.
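The MCMC route described here was exposed as mcmcsamp() in the lme4 of
that era (later versions of lme4 no longer provide it). A parametric
bootstrap gives a similar empirical distribution of the quantities
returned by coef(); a minimal sketch with current lme4, again on the
sleepstudy data:

library(lme4)
fm <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy)

## refit to simulated data and re-extract, for each replicate, the
## per-subject combination of fixed intercept and conditional mode
bb <- bootMer(fm,
              FUN  = function(m) coef(m)$Subject[, "(Intercept)"],
              nsim = 200)
apply(bb$t, 2, sd)   # empirical SEs for the rows of coef(fm)$Subject

Whether the fitted random effects are held fixed across the simulated
data sets (the use.u argument to bootMer) is essentially the
conditioning question raised above.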
>
> This is probably an unsatisfactory answer for Irene who may
> have wanted something quick and simple. Unfortunately, I
> don't think there is a quick, simple answer here.
>
> I suggest we move this discussion to the R-SIG-Mixed-Models
> list which I am cc:ing on this reply.
>
> > -----Original Message-----
> > From: r-help-bounces at r-project.org on behalf of Irene Mantzouni
> > Sent: Mon 10/15/2007 3:20 PM
> > To: r-help at stat.math.ethz.ch
> > Subject: [R] coef se in lme
> >
> > Hi all!
> >
> > How is it possible to estimate standard errors for coef
> > obtained from lme?
> > Is there something like se.coef() for lmer, or what is the
> > analytical solution?
> >
> > Thank you!
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>