[R-sig-ME] predict in lmer()?

Douglas Bates bates at stat.wisc.edu
Sun Jul 31 19:34:38 CEST 2011


On Sun, Jul 31, 2011 at 11:01 AM, David Winsemius
<dwinsemius at comcast.net> wrote:
>
> On Jul 31, 2011, at 9:41 AM, Ewart Thomas wrote:
>
>> david, i'm now slogging thru the most informative thread!
>>
>> as dennis advised, one has to do the 'predict' part 'by hand' when
>> using lmer.  this is done in the 2 lines:
>> mm = model.matrix(terms(fm1),newdat)
>> newdat$distance = mm %*% fixef(fm1)
>> good luck - this thing is worth sticking with!
>
> The ongoing quest for prediction and confidence intervals is
> chronicled in the mixed-effects Archive. The online discussion of
> prediction intervals in lme4-derived models, i.e. objects of class
> "mer", can be extracted with a Google search of the mixed-models
> Archive at the "advanced search" page:
>
> http://www.google.com/search?q=lme4+OR+mer+%22prediction+intervals%22+site%3Ahttps%3A%2F%2Fstat.ethz.ch%2Fpipermail%2Fr-sig-mixed-models%2F&hl=en&num=50
>
> The other place to look is in Bates' drafts of the book he is writing
> on lme4 methods. In chapter 1 of a 2010 draft he suggests plotting
> prediction intervals (of the random effects) thusly:
>
> dotplot(ranef(fm1ML, postVar = TRUE))
>
> He continues using that method for the next few chapters of that
> draft. I wonder if it makes sense to construct prediction
> intervals without proper consideration of the fact that some of the
> variability is assumed to arise from fixed effects. In his written
> material it appears that Bates studiously avoids mixing random and
> fixed effects in what he is calling "prediction intervals". There is
> some discussion of this problem in chapter 5 of the March 2011
> material, but the matrix math is too complex for me to follow and
> there is no accompanying R code to go along with it.

You're mixing two concepts.  The intervals to which you refer are not
prediction intervals on future responses.  They are intervals that
contain the central 1 - alpha area under the (marginal) density curve
for each component of the conditional distribution of the random
effects given the observed data, evaluated at the parameter estimates.
The random effects are an unobserved random vector.  The model is
defined by the unconditional distribution of the random effects and
the conditional distribution of the responses, given a value of the
random effects.  From these two we can derive the joint distribution
of the random effects and the responses.  When we condition on the
observed value of the responses we get a conditional density of the
random effects, given the observed data.  For a linear mixed model
this is a multivariate Gaussian distribution.  If we look at the
marginal distribution of a particular component of the random-effects
vector we get a univariate Gaussian with a mean and standard deviation
that we can evaluate.  The 95% prediction interval on a particular
component of the random-effects vector, given the observed response,
is this mean plus or minus 1.96 times the standard deviation.
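
In code, for the sleepstudy example, those intervals can be computed
from the conditional means and variances that ranef() returns when
postVar = TRUE.  A rough sketch (evaluated at the parameter estimates,
as described above, so the uncertainty in the fixed effects and the
variance components is ignored):

library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
re <- ranef(fm1, postVar = TRUE)   # conditional means of the random effects
pv <- attr(re$Subject, "postVar")  # 2 x 2 x 18 array of conditional
                                   # variance-covariance matrices, one per subject
se <- sqrt(t(apply(pv, 3, diag)))  # conditional standard deviations, 18 x 2
lwr <- re$Subject - 1.96 * se      # lower limits of the 95% intervals
upr <- re$Subject + 1.96 * se      # upper limits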

I know that's a mouthful and may seem pedantic but being a pedant is
the only way that I have been able to reach an understanding of these
models.  I need to trace everything back to the probability model and
keep clear in my mind what are parameters, what are random variables
and what are observed values.

I apologize to the readers of the list for continually changing the
descriptions and the underlying software.  I know this is an
inconvenience to many, such as Harald Baayen whom I will see next
week.  My understanding of the models and the available computational
methods has evolved considerably and I hope the end result is worth
the annoyance of the many changes that have gone on.  I'm just
relieved that I work for an Open Source project and don't need to
produce software under a deadline :-)
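
As for the population-level predictions themselves, the two lines
Ewart quotes can be extended roughly as below, along the lines of the
FAQ recipe Dennis pointed to.  This is a sketch only: it conditions on
the estimated fixed effects, so the intervals are approximate
confidence intervals on the fixed-effects predictions, not prediction
intervals for new observations, and they ignore the uncertainty in the
variance components.

library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
newdat <- expand.grid(Days = c(0, 4, 8), Reaction = 0)  # response is a placeholder
mm <- model.matrix(terms(fm1), newdat)          # fixed-effects model matrix for newdat
newdat$Reaction <- mm %*% fixef(fm1)            # population-level predictions
pvar <- diag(mm %*% tcrossprod(vcov(fm1), mm))  # variance of the predictions
                                                # (fixed effects only)
newdat$lwr <- newdat$Reaction - 1.96 * sqrt(pvar)
newdat$upr <- newdat$Reaction + 1.96 * sqrt(pvar)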


>> On Jul 31, 2011, at 6:35 AM, David Winsemius wrote:
>>
>>>
>>> On Jul 31, 2011, at 8:47 AM, Dennis Murphy wrote:
>>>
>>>> Hi:
>>>>
>>>> See http://glmm.wikidot.com/faq
>>>>
>>>> Go about 2/3 of the way down until you see the section 'Predictions
>>>> and/or confidence (or prediction) intervals on predictions'.
>>>
>>> Aren't objects created by lmer of class "mer"? Attempting to follow
>>> your advice with the first example provided in ?lmer meets with
>>> frustration:
>>>
>>> require(lme4)
>>> (fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy))
>>> newdat0 <- expand.grid(Reaction=c(200,300,400), Days=c(0,4,8),
>>> Subject=c(5,10,15) )
>>> newdat0$pred <- predict(fm1, newdat0, level = 0)
>>>
>>> Error in UseMethod("predict") :
>>>  no applicable method for 'predict' applied to an object of class
>>> "mer"
>>>
>>> I'm not even a journeyman user of ME methods, so this is just a
>>> question. Do you have a fix? (There are predict methods listed in
>>> nlme but none in the index of lme4.)
>>>
>>> --
>>> David.
>>>
>>>
>>>
>>> David Winsemius, MD
>>> West Hartford, CT
>>>
>>
>
> David Winsemius, MD
> West Hartford, CT
>