[R-sig-ME] [R] lmer - BLUP prediction intervals
Emmanuel Curis
curis at pharmacie.univ-paris5.fr
Wed Feb 6 09:39:24 CET 2013
Hello,
I think it also depends on what kind of prediction you want to make, assuming continuous predictors --- at least if I have understood the ideas behind mixed-effects models correctly...
For instance, imagine you fit a model of, say, drug concentration over time for several patients, with patient as a random effect on concentration.
If you want to use the model to predict for a new patient, the
approach of adding the variance component of the patient random effect
to the variance of the residuals should work, I guess.
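A minimal sketch of this first case in lme4 terms, assuming a hypothetical fit of the form lmer(conc ~ time + (1 | patient), data = d), where d, conc, time and patient are invented names:

    library(lme4)
    fit <- lmer(conc ~ time + (1 | patient), data = d)  # hypothetical model
    newdat <- data.frame(time = c(1, 2, 4, 8))          # times for a NEW patient
    X <- model.matrix(~ time, newdat)
    pred <- drop(X %*% fixef(fit))                      # population-level prediction
    vc <- as.data.frame(VarCorr(fit))
    var_patient <- vc$vcov[vc$grp == "patient"]         # between-patient variance
    var_resid   <- sigma(fit)^2                         # residual variance
    var_fixed   <- diag(X %*% as.matrix(vcov(fit)) %*% t(X))  # fixed-effect uncertainty
    se <- sqrt(var_fixed + var_patient + var_resid)
    cbind(newdat, fit = pred, lwr = pred - 1.96 * se, upr = pred + 1.96 * se)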
However, if you want to use the model to predict concentrations for one of the patients already in the model --- for a new administration, for instance, or at intermediate times --- I think using that patient's random-effect "estimate" would be better suited. It is rather in this second case, I think, that the concern about "correlations" between the ranef outputs and the parameter estimates arises, no? In the first case, what would matter is the correlation between the estimates of the residual and random-effect variances and the fixed-effects estimates --- am I wrong in saying that all of these are estimates, including the variance and possibly covariance terms?
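And a sketch of this second case, continuing the same hypothetical fit: predict() with re.form = NULL (current lme4) adds the patient's conditional mode to the fixed part, and a first-order interval then uses only the fixed-effect and residual variances, ignoring the uncertainty of the conditional mode itself:

    ## "P01" stands for a patient already present in the hypothetical data d
    newdat2 <- data.frame(time = c(3, 6), patient = "P01")
    pred2 <- predict(fit, newdata = newdat2, re.form = NULL)  # uses the conditional modes
    X2 <- model.matrix(~ time, newdat2)
    var_fixed2 <- diag(X2 %*% as.matrix(vcov(fit)) %*% t(X2))
    se2 <- sqrt(var_fixed2 + sigma(fit)^2)  # no patient variance term; mode uncertainty ignored
    cbind(newdat2, fit = pred2, lwr = pred2 - 1.96 * se2, upr = pred2 + 1.96 * se2)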
Best regards,
On Wed, Feb 06, 2013 at 01:15:17PM +1100, Andrew Robinson wrote:
« I think that it is a reasonable way to proceed just so long as you
« interpret the intervals guardedly and document your assumptions carefully.
«
« Cheers
«
« Andrew
«
« On Wednesday, February 6, 2013, Daniel Caro wrote:
«
« > Dear all
« >
« > I have not been able to follow the discussion. But I would like to
« > know if it makes sense to calculate prediction intervals like this:
« >
« > var(fixed effect + random effect) = var(fixed effect) + var(random effect) + 0 (i.e., the cov is zero)
« >
« > and based on this create the prediction intervals. Does this make sense?
« >
« > All the best,
« > Daniel
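One hedged reading of Daniel's formula in lme4 terms, for a random-intercept model (all names hypothetical, and the zero covariance taken at face value): var(fixed effect) from vcov(), var(random effect) from the conditional variances returned by ranef(..., condVar = TRUE):

    ## hypothetical fit: fit <- lmer(y ~ 1 + (1 | g), data = d)
    re <- ranef(fit, condVar = TRUE)$g
    pv <- drop(attr(re, "postVar"))          # conditional variance of each mode
    var_fixed <- as.matrix(vcov(fit))[1, 1]  # variance of the intercept estimate
    se  <- sqrt(var_fixed + pv)              # covariance term taken as zero, as proposed
    est <- fixef(fit)[1] + re[, 1]           # intercept plus conditional mode
    cbind(lwr = est - 1.96 * se, upr = est + 1.96 * se)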
« >
« > On Tue, Feb 5, 2013 at 8:54 PM, Douglas Bates <bates at stat.wisc.edu> wrote:
« > > On Tue, Feb 5, 2013 at 2:14 PM, Andrew Robinson <A.Robinson at ms.unimelb.edu.au> wrote:
« > >
« > >> I'd have thought that the joint correlation matrix would be of the estimates of the fixed effects and the random effects, rather than the things themselves.
« > >>
« > >
« > > Well, it may be because I have turned into a grumpy old man but I get picky about terminology and the random effects are not parameters - they are unobserved random variables. They don't have "estimates" in the sense of parameter estimates. The quantities returned by the ranef function are the conditional means (in the case of a linear mixed model, conditional modes in general) of the random effects given the observed data, evaluated with the parameters at their estimated values. In the Bayesian point of view none of this is problematic because they're all random variables, but otherwise I struggle with the interpretation of how these can be considered jointly. If you want to consider the distribution of the random effects you need to have known values of the parameters.
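A small simulated sketch (invented data) of this point: the values ranef returns are conditional means given the observed data, visibly shrunk toward zero relative to the raw per-group deviations:

    library(lme4)
    set.seed(1)
    d <- data.frame(g = factor(rep(1:20, each = 5)))
    d$y <- rnorm(20, sd = 2)[d$g] + rnorm(100)   # true group effects + unit noise
    fit <- lmer(y ~ 1 + (1 | g), data = d)
    raw  <- tapply(d$y, d$g, mean) - mean(d$y)   # unshrunken group deviations
    blup <- ranef(fit)$g[, 1]                    # conditional means given the data
    plot(raw, blup); abline(0, 1, lty = 2)       # BLUPs fall inside the 1:1 line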
« > >
« > >
« > >> The estimates are statistical quantities, with specified distributions, under the model. The model posits these different roles (parameter, random variable) for the quantities that are the targets of the estimates, but the estimates are just estimates, and as such, they have a correlation structure under the model, and that correlation structure can be estimated.
« > >>
« > >> An imperfect analogy from least-squares regression is the correlation structure of residual estimates, induced by the model. We say that the errors are independent, but the model creates a (modest) correlation structure that can be measured, again, conditional on the model.
« > >>
« > >
« > > Well, the residuals are random variables and we can show that at the least squares estimates of the parameters they will have a known Gaussian distribution which, it turns out, doesn't depend on the values of the coefficients. But those are the easy cases. In the linear mixed model we still have a Gaussian distribution and a linear predictor, but that is for the conditional distribution of the response given the random effects. For the complete model things get much messier.
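This least-squares case is easy to verify numerically: with the hat matrix H = X (X'X)^{-1} X', the residuals have covariance sigma^2 (I - H) whatever the coefficients are. A sketch with invented data:

    set.seed(2)
    x <- runif(30); X <- cbind(1, x)
    H <- X %*% solve(crossprod(X), t(X))       # hat matrix
    R <- diag(30) - H                          # cov(residuals) = sigma^2 * R
    round((R / sqrt(outer(diag(R), diag(R))))[1:3, 1:3], 3)  # modest correlations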
« > >
« > > I'm not making these points just to be difficult. I have spent a lot of time thinking about these models and trying to come up with a coherent way of describing them. Along the way I have come to the conclusion that the way these models are often described is, well, wrong. And those descriptions include some that I have written. For example, you often see the model described as the linear predictor for an observation plus a "noise" term, epsilon, and the statement that the distribution of the random effects is independent of the distribution of the noise term. I now view the linear predictor as a part of the conditional distribution of the response given the random effects, so it wouldn't make sense to talk about these distributions being independent. The biggest pitfall in transferring your thinking from a linear model to any other kind (GLM, LMM, GLMM) is the fact that we can make sense of a Gaussian distribution minus its mean, so we write the linear model in the "signal plus noise" form as Y = X\beta + \epsilon, where Y is an n-dimensional random variable, X is the n by p model matrix, \beta is the p-dimensional vector of coefficients and \epsilon is an n-dimensional Gaussian with mean zero. That doesn't work in the other cases, despite the heroic attempts of many people to write things in that way.
« > >
« > > Here endeth the sermon.
« > >
« > >
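A small simulation sketch (invented names) of that last point: the Gaussian linear model really can be generated as signal plus noise, whereas a Poisson GLMM can only be generated from the conditional distribution of the response given the random effects; there is no epsilon to add:

    set.seed(3)
    n <- 100; g <- factor(rep(1:10, each = 10)); x <- runif(n)
    b <- rnorm(10, sd = 0.5)             # random effects
    y_gauss <- 1 + 2 * x + rnorm(n)      # signal-plus-noise form works here
    eta <- 0.2 + 0.5 * x + b[g]          # linear predictor, conditional on b
    y_pois <- rpois(n, exp(eta))         # Y | b ~ Poisson(exp(eta)); no additive noise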
« > >> On Wed, Feb 6, 2013 at 6:34 AM, Douglas Bates <bates at stat.wisc.edu> wrote:
« > >>
« > >>> It is possible to create the correlation matrix of the fixed effects and the random effects jointly using the results from lmer but I have difficulty deciding what this would represent statistically. If you adopt a B
«
«
«
« --
« Andrew Robinson
« Director (A/g), ACERA
« Senior Lecturer in Applied Statistics    Tel: +61-3-8344-6410
« Department of Mathematics and Statistics Fax: +61-3-8344-4599
« University of Melbourne, VIC 3010 Australia
« Email: a.robinson at ms.unimelb.edu.au    Website: http://www.ms.unimelb.edu.au
«
« FAwR: http://www.ms.unimelb.edu.au/~andrewpr/FAwR/
« SPuR: http://www.ms.unimelb.edu.au/spuRs/
--
Emmanuel CURIS
emmanuel.curis at parisdescartes.fr
Web page: http://emmanuel.curis.online.fr/index.html