[R-sig-ME] Fwd: same old question - lme4 and p-values

Douglas Bates bates at stat.wisc.edu
Tue Apr 8 18:53:04 CEST 2008


On Mon, Apr 7, 2008 at 9:18 PM, David Henderson <dnadavenator at gmail.com> wrote:
> Hi Doug:

> > Perhaps I should clarify.  The summary of a fitted lmer model does not
> > provide p-values because I don't know how to calculate them in an
> > acceptable way, not because I am philosophically opposed to them.  The
> > estimates and the approximate standard errors can be readily
> > calculated as can their ratio.  The problem is determining the
> > appropriate reference distribution for that ratio from which to
> > calculate a p-value.  In fixed-effects models (under the "usual"
> > assumptions) that ratio is distributed as a T with a certain number of
> > degrees of freedom.  For mixed models it is not clear exactly what
> > distribution it has - except in certain cases of completely balanced
> > data sets (i.e. the sort of data sets that occur in text books).  At
> > one time I used a T distribution and an upper bound on the degrees of
> > freedom but I was persuaded that providing p-values that could be
> > strongly "anti-conservative" is worse than not providing any.
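
For concreteness, the kind of summary in question can be produced as
follows.  This is a minimal sketch using the sleepstudy data that ships
with lme4, not any particular application:

library(lme4)
## the fixed-effects table of the summary shows the estimates, their
## approximate standard errors and the ratio of the two (labelled
## "t value"), but no p-value column, for the reasons given above
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(fm)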

>  Now I understand the situation better and am in agreement that this is
> clearly the right solution at this point.

> > The approach that I feel is most likely to be successful in
> > summarizing these models is first to obtain the REML or ML estimates
> > of the parameters and then to run a Markov chain Monte Carlo sampler
> > to assess the variability in the parameters (or, if you prefer, the
> > variability in the parameter estimators).  (Note: I am not advocating
> > using MCMC to obtain the estimates; I suggest MCMC for assessing the
> > variability.)

>  I'm a little confused as to what the Monte Carlo part of this
> scenario is.  If you perform REML or ML, theoretically it should
> always converge to the REML/ML estimates (unless you have a flat or
> multimodal likelihood, each of which produces other problems).  I
> understand you are fixing the parameter estimates at the REML/ML
> estimates, but what is the random component?

Although the MCMC chain starts at the REML/ML estimates, its iterations
are of the form

 - given the current residuals, sample a new value of $\sigma$ using a
random value from a $\chi^2$ distribution
 - given the current values of $\sigma$ and the variance-covariance of
the random effects, sample new values of the fixed effects and the
random effects (in the Bayesian formulation both the fixed-effects
parameters and the random effects are regarded as random variables).
For a locally uniform prior on the fixed effects this stage can be
reduced to sampling from a multivariate normal distribution.
 - given the current values of $\sigma$, the fixed effects, the
variance-covariance of the random effects and the random effects
themselves, sample from the distribution of the variance-covariance
parameters.
 - repeat the above three steps n times
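
For a balanced one-way random-effects model, such as the dyestuff
example below, the three steps can be written out directly.  The
following is a minimal sketch of the scheme (my own illustrative code,
not the mcmcsamp implementation; it assumes balanced data, a flat prior
on the mean and scale-invariant priors on the two variances):

gibbs1way <- function(y, batch, nsim = 1000) {
  batch <- as.factor(batch)
  ybar  <- tapply(y, batch, mean)        # batch means
  q     <- length(ybar)                  # number of batches
  N     <- length(y)
  n     <- N / q                         # replicates per batch (balanced)
  mu    <- mean(y)                       # start near the ML estimates
  b     <- ybar - mu
  s2b   <- var(ybar)                     # rough between-batch variance
  out   <- matrix(NA_real_, nsim, 3,
                  dimnames = list(NULL, c("mu", "sigma", "sigma.b")))
  for (k in seq_len(nsim)) {
    ## step 1: given the current residuals, sample a new value of sigma^2
    r   <- y - mu - b[batch]
    s2  <- sum(r^2) / rchisq(1, df = N)
    ## step 2: given the variances, sample the fixed effect and the random
    ## effects jointly from their multivariate normal conditional; for a
    ## balanced design this can be done by composition: mu first, then b
    v   <- s2b + s2 / n                  # variance of a batch mean about mu
    mu  <- rnorm(1, mean(ybar), sqrt(v / q))
    shrink <- s2b / v
    b   <- rnorm(q, shrink * (ybar - mu), sqrt(shrink * s2 / n))
    ## step 3: the "simple" update, conditioning only on the random effects
    ## (a scalar version of the Wishart step; this is where the feedback
    ## problem described below arises)
    s2b <- sum(b^2) / rchisq(1, df = q)
    out[k, ] <- c(mu, sqrt(s2), sqrt(s2b))
  }
  out
}
## e.g. samp <- gibbs1way(Dyestuff$Yield, Dyestuff$Batch, 10000)
## (assuming the Dyestuff data set is available)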

It is the third step that gets tricky.  The "simple" approach is to
condition only on the values of the random effects and use a Wishart
distribution.  The problem is feedback between steps 2 and 3.  If in
step 3 you happen to get a very small value of a variance component,
then the next set of random effects sampled in step 2 will be small in
magnitude, resulting in the next sample of the variance component in
step 3 being very small, resulting in ...

I enclose a script that illustrates this effect using a model fit to
data from an experiment described in the classic book "Statistical
Methods in Research and Production", edited by O. L. Davies.  The
"Yield" variable is the amount of dyestuff in five different analyses
of samples from each of six different batches.  The REML estimates for
a simple random-effects model reproduce the estimates of the variance
components shown in the book (there were at least 4 editions of the
book, the first in 1947, and all contain this example).  If you run
this script yourself and plot the samples from the Bayesian posterior
distribution of the 3 parameters against the iteration number of the
MCMC sample, you will see the places where the value of ST1, which is
the standard deviation for batches divided by the standard deviation
for analyses within a batch, gets stuck at zero.  Those places also
coincide with unusually large values of sigma and attenuated
variability in the distribution of the mean parameter.

(I don't enclose the plots because even the PDF files are very large
when plotting 10,000 samples.)
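
The model fit and the sampling step are along the following lines (a
sketch assuming the Dyestuff data set distributed with lme4 and the
mcmcsamp function discussed above; the enclosed Davies_R.txt is the
actual script):

library(lme4)
## simple random-effects model for the dyestuff data: an overall mean
## plus a random effect for each batch
fm1 <- lmer(Yield ~ 1 + (1 | Batch), Dyestuff)
summary(fm1)                      # REML estimates of the variance components
set.seed(1234)
samp <- mcmcsamp(fm1, n = 10000)  # MCMC sample of the parameters
## plot each parameter (including ST1, the relative standard deviation
## for Batch) against the iteration number to see where the chain
## sticks at zero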

>  Of course, I could always stop being lazy and just look at the source...
> ;^)
>
>
>
> > The current version of the mcmcsamp function suffers from the
> > practical problem that it gets stuck at near-zero values of variance
> > components.  There are some approaches to dealing with that.  Over the
> > weekend I thought that I had a devastatingly simple way of dealing
> > with such cases until I reflected on it a bit more and realized that
> > it would require a division by zero.  Other than that, it was a good
> > idea.
> >
>
>  At least the variance estimates were not negative... ;^)
>
>  Thanks!!
>
>
>
>  Dave H
>  --
>  David Henderson, Ph.D.
>  Director of Community
>  REvolution Computing
>  1100 Dexter Avenue North, Suite 250
>  206-577-4778 x3203
>  DNADave at Revolution-Computing.Com
>  http://www.revolution-computing.com
>
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Davies_R.txt
URL: <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20080408/7b69f7a7/attachment.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Davies_Rout.txt
URL: <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20080408/7b69f7a7/attachment-0001.txt>

