[R] Conservative "ANOVA tables" in lmer
Douglas Bates
bates at stat.wisc.edu
Wed Sep 13 17:06:49 CEST 2006
On 9/13/06, Dimitris Rizopoulos <dimitris.rizopoulos at med.kuleuven.be> wrote:
>
> ----- Original Message -----
> From: "Manuel Morales" <Manuel.A.Morales at williams.edu>
> To: <A.Robinson at ms.unimelb.edu.au>
> Cc: "Douglas Bates" <bates at stat.wisc.edu>; "Manuel Morales"
> <Manuel.A.Morales at williams.edu>; <r-help at stat.math.ethz.ch>
> Sent: Wednesday, September 13, 2006 1:04 PM
> Subject: Re: [R] Conservative "ANOVA tables" in lmer
>
>
> > On Wed, 2006-09-13 at 08:04 +1000, Andrew Robinson wrote:
> >> On Tue, September 12, 2006 7:34 am, Manuel Morales wrote:
> >> > On Mon, 2006-09-11 at 11:43 -0500, Douglas Bates wrote:
> >> >> Having made that offer I think I will now withdraw it. Peter's
> >> >> example has convinced me that this is the wrong thing to do.
> >> >>
> >> >> I am encouraged by the fact that the results from mcmcsamp
> >> >> correspond closely to the correct theoretical results in the
> >> >> case that Peter described. I appreciate that some users will
> >> >> find it difficult to work with an MCMC sample (or to convince
> >> >> editors to accept results based on such a sample) but I think
> >> >> that these results indicate that it is better to go after the
> >> >> marginal distribution of the fixed-effects estimates (which is
> >> >> what is being approximated by the MCMC sample - up to
> >> >> Bayesian/frequentist philosophical differences) than to use
> >> >> the conditional distribution and somehow try to adjust the
> >> >> reference distribution.
> >> >
> >> > Am I right that the MCMC sample cannot be used, however, to
> >> > evaluate the significance of parameter groups - for example, to
> >> > assess the significance of a three-level factor? Are there
> >> > better alternatives than simply adjusting the CI for the number
> >> > of factor levels (1 - alpha/levels)?
> >>
> >> I wonder whether the likelihood ratio test would be suitable
> >> here? That seems to be supported. It just takes a little longer.
> >>
> >> > require(lme4)
> >> > data(sleepstudy)
> >> > fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> >> > fm2 <- lmer(Reaction ~ Days + I(Days^2) + (Days|Subject), sleepstudy)
> >> > anova(fm1, fm2)
> >>
> >> So, a brief overview of the popular inferential needs and
> >> solutions would then be:
> >>
> >> 1) Test the statistical significance of one or more fixed or
> >> random effects - fit a model with and a model without the terms,
> >> and use the LRT.
> >
> > I believe that the LRT is anti-conservative for fixed effects, as
> > described in Pinheiro and Bates's companion book to nlme.
> >
>
> You see this effect if you're using REML; for ML I don't think
> there is any problem with using the LRT between nested models with
> different fixed-effects structures.
There are two issues here: how the test statistic is evaluated and
what reference distribution is used for the test statistic (i.e. how
do you convert a value of the test statistic to a p-value). Manuel is
addressing the latter issue. If you compare the difference of
-2*logLik for the models to a chi-square with degrees of freedom
determined by the difference in the number of parameters the test will
be anti-conservative when the number of observations is small. The
use of the chi-square as a reference distribution is based on
asymptotic properties.
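[An illustrative aside: the anti-conservativeness of the chi-square reference distribution in small samples can be seen even in an ordinary linear model, where an exact F test is available for comparison. This is only a sketch in base R with made-up simulated data; lme4 is not needed.]

```r
## Sketch: with few observations, referring 2*(logLik difference) to a
## chi-square gives a smaller p-value than the exact F test, i.e. the
## asymptotic test is anti-conservative.
set.seed(1)
n  <- 10                         # deliberately small sample
x  <- rnorm(n)
y  <- rnorm(n)                   # the null model is true
m0 <- lm(y ~ 1)                  # reduced model
m1 <- lm(y ~ x)                  # full model, one extra parameter
lrt <- as.numeric(2 * (logLik(m1) - logLik(m0)))
## sanity check: for a Gaussian linear model the LRT statistic
## has the closed form n * log(RSS0/RSS1)
rss0 <- sum(resid(m0)^2); rss1 <- sum(resid(m1)^2)
stopifnot(all.equal(lrt, n * log(rss0/rss1)))
p.chisq <- pchisq(lrt, df = 1, lower.tail = FALSE)  # asymptotic reference
p.F     <- anova(m0, m1)[2, "Pr(>F)"]               # exact reference
c(asymptotic = p.chisq, exact = p.F)  # asymptotic p-value is smaller
```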
The other question is how one evaluates the likelihood-ratio test
statistic, and that is the issue that Dimitris is addressing. The REML
criterion is a modified likelihood and it is inappropriate to look at
differences in the REML criterion when the models being compared have
different fixed-effects specifications, or even a different
parameterization of the fixed effects. However, the anova method for
an lmer object does not use the REML criterion even when the model has
been estimated by REML. It uses the profiled log-likelihood evaluated
at the REML estimates of the relative variances of the random effects.
That's a complicated statement so let me break it down.
The optimization in lmer is done with respect to the elements of the
variance-covariance matrix of the random effects relative to
$\sigma^2$. Given these values the conditional estimates of the
fixed-effects parameters and of $\sigma^2$ can be evaluated directly
with some linear algebra. In the summary or show output of an lmer
model there are two quantities called the MLdeviance and the
REMLdeviance. Those are based on the same relative variances but
different conditional estimates of $\sigma^2$ (and hence different
estimates of the elements of the variance-covariance matrix of the
random effects). It turns out that there is very little difference in the
value of the profiled log-likelihood at the ML estimates and at the
REML estimates. This is not to say that the log-likelihood is similar
at the two (complete) sets of estimates - it is the profiled
log-likelihoods that are similar and these are what are used to create
the likelihood ratio test statistic, even when the models have been
fit by REML.
As I said, this is complicated and until I went to reply to this
message I hadn't really sorted it out for myself. I knew the
empirical result, which I had checked before releasing the code, but I
hadn't worked out the details of why it occurred. This discussion has
been very helpful to me at least.
If the explanation is too complicated, I suggest trying an example.
Run the code that Andrew sent and look at the logLik that is quoted in
the anova test. It is a log-likelihood, not a REML criterion.
Now try
> logLik(fm1ML <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy, method = "ML"))
'log Lik.' -875.9697 (df=5)
> logLik(fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy), REML = FALSE)
'log Lik.' -875.993 (df=5)
and
> logLik(fm2ML <- lmer(Reaction ~ Days + I(Days^2) + (Days|Subject), sleepstudy, method = "ML"))
'log Lik.' -875.1408 (df=6)
> logLik(fm2 <- lmer(Reaction ~ Days + I(Days^2) + (Days|Subject), sleepstudy), REML = FALSE)
'log Lik.' -875.1588 (df=6)
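[For completeness, the two ML log-likelihoods printed above can be turned into the likelihood ratio test by hand; this sketch in base R just copies the values from the output.]

```r
## Sketch: the LRT comparing fm1ML and fm2ML by hand, using the
## log-likelihoods quoted in the transcript above.
ll1 <- -875.9697   # fm1ML, df = 5
ll2 <- -875.1408   # fm2ML, df = 6
lrt <- 2 * (ll2 - ll1)                   # difference in -2*logLik
p   <- pchisq(lrt, df = 1, lower.tail = FALSE)
p    # the p-value an anova() comparison of the ML fits would report
```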
Thanks to everyone for their comments on this issue.
> >> 2) Obtain confidence intervals for one or more fixed or random
> >> effects - use mcmcsamp.
> >>
> >> Did I miss anything important? What else would people like to do?
> >>
> >> Cheers
> >>
> >> Andrew
> >>
> >> Andrew Robinson
> >> Senior Lecturer in Statistics             Tel: +61-3-8344-9763
> >> Department of Mathematics and Statistics  Fax: +61-3-8344-4599
> >> University of Melbourne, VIC 3010 Australia
> >> Email: a.robinson at ms.unimelb.edu.au
> >> Website: http://www.ms.unimelb.edu.au
> >>
> >> ______________________________________________
> >> R-help at stat.math.ethz.ch mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.