[R-sig-ME] Fwd: same old question - lme4 and p-values
John Maindonald
John.Maindonald at anu.edu.au
Mon Apr 7 12:47:17 CEST 2008
Real CIs?
~~~~~~~
Most application-area people, and indeed many statisticians,
treat confidence intervals (I'd prefer to call them coverage
intervals, but that argument may be lost) as probability
statements about the parameter. The interpretation
that is strictly correct does not, in my view, make a lot of
sense relative to what application-area people want.
Now in fact classical intervals have (if not exact, then close
enough for all practical purposes) a Bayesian interpretation.
This interpretation has the advantage of making explicit the
assumptions that will support the interpretation of confidence
intervals as probability statements about the parameter.
While this is not the rationale for CIs that is advertised in
those texts that are careful in what they say about CIs,
I am suggesting that it is a more enlightening rationale.
Whether or not one then has a different entity seems to
me slightly academic.
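To make the equivalence concrete: for normal data with known sigma
and a flat (non-informative) prior on the mean, the posterior is
Normal(xbar, sigma^2/n), so the central 95% credible interval is
algebraically the classical CI. The sketch below (Python rather than
R, with made-up numbers) checks by simulation that this flat-prior
credible interval also has the advertised frequentist coverage.

```python
import math
import random

random.seed(1)
mu_true, sigma, n = 5.0, 2.0, 25   # invented values for illustration
z = 1.959964                        # 97.5% point of N(0, 1)
reps = 20000

covered = 0
for _ in range(reps):
    xs = [random.gauss(mu_true, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    # Flat prior => posterior for mu is N(xbar, sigma^2/n); its central
    # 95% credible interval is the same formula as the classical CI.
    half = z * sigma / math.sqrt(n)
    covered += (xbar - half <= mu_true <= xbar + half)

coverage = covered / reps
print(round(coverage, 3))  # close to 0.95
```

The interval is computed as a Bayesian credible interval, yet it
covers the true mean in about 95% of repeated samples, which is
exactly the classical coverage property.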
I am arguing, then, that the intervals provided by
mcmcsamp() are preferable to CIs. One knows what the
prior was that led to them. I do not see why editors who
insist on p-values should not be entirely happy with them.
They can be sold as a superior kind of CI, and p-values
that are derived by the same route are a superior kind of
p-value!
Note also: whether they are really Bayesian is a moot point.
The prior is chosen primarily for ease of calculation, and it
may be better to think of the MCMC calculation as a
mechanism for calculating an interval that in intention is not
much different from a classical CI. Douglas, is this heresy?
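The point about MCMC as a calculating mechanism can be illustrated
without lme4 at all. The toy random-walk Metropolis sampler below
(a Python sketch with invented data; this is not mcmcsamp()'s
machinery) targets the flat-prior posterior of a normal mean, and
its percentile interval nearly reproduces the classical CI.

```python
import math
import random

random.seed(42)
sigma, n = 2.0, 30                  # invented, sigma treated as known
data = [random.gauss(3.0, sigma) for _ in range(n)]
xbar = sum(data) / n

def log_post(mu):
    # Flat prior: log posterior = log likelihood up to a constant,
    # and xbar is sufficient for mu when sigma is known.
    return -n * (mu - xbar) ** 2 / (2 * sigma ** 2)

# Random-walk Metropolis sampler for mu.
mu, draws = xbar, []
for _ in range(40000):
    prop = mu + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop
    draws.append(mu)

draws = sorted(draws[5000:])        # discard burn-in, sort for quantiles
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]

# Classical 95% CI for comparison:
half = 1.959964 * sigma / math.sqrt(n)
print((lo, hi), (xbar - half, xbar + half))
```

The MCMC percentile interval and the classical interval agree to a
few hundredths here, which is the sense in which the MCMC interval
is "in intention not much different" from a classical CI.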
The demands of journals
~~~~~~~~~~~~~~~~~~~
At the end of the day, there may sometimes have to be
concessions to editorial rigidity. But let's at least try for
more accommodating approaches, noting that we can
often easily do what was not possible even a decade
ago. No-one is talking about forcing anything on
anyone, as I read the discussion.
With respect to effect estimates and SEs, surely these
are CIs, maybe 68% CIs, in different dress. They may
be preferable to CIs if effects are commonly much larger
than the relevant SE, say at least 4 times as large.
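The 68% figure is simply the coverage of the "estimate plus or minus
one standard error" interval under a normal sampling distribution,
i.e. P(|Z| <= 1) for standard normal Z. A two-line Python check:

```python
import math

# P(|Z| <= 1) for Z ~ N(0, 1), via the error function:
# 2*Phi(1) - 1 = erf(1/sqrt(2)).
coverage_1se = math.erf(1 / math.sqrt(2))
print(round(coverage_1se, 4))  # 0.6827
```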
A better way?
~~~~~~~~~~
I am not committed to defending p-values or CIs, or
Bayesian rough equivalents. There's not, though, going
to be much movement until there is broad agreement on
good alternatives (or, more likely, on a smorgasbord of
good alternatives) in the statistical community, and those
alternatives are implemented in readily accessible software.
Douglas's mcmcsamp() has advanced the state of the art
for multi-level models, offering an approach that had not
previously been readily available. It is anyone's guess where it,
and the statistics and graphs that it makes readily possible, will
in the course of time fit among the styles of presentation that
application-area people find helpful.
John Maindonald email: john.maindonald at anu.edu.au
phone : +61 2 (6125)3473 fax : +61 2(6125)5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
On 7 Apr 2008, at 12:05 PM, David Henderson wrote:
> Hi John:
>
>> For all practical purposes, a CI is just the Bayesian credible
>> interval that one gets with some suitable "non-informative prior".
>> Why not then be specific about the prior, and go with the Bayesian
>> credible interval? (There is an issue whether such a prior can
>> always be found. Am I right in judging this of no practical consequence?)
>
>
> What? Could you explain this a little more? There is nothing
> Bayesian about a classical CI (i.e., not a Bayesian credible set,
> highest posterior density interval, or whatever terminology you prefer).
> The interpretation is completely different, and the assumptions used
> in deriving the interval are also different. Even though the
> interval created when using a noninformative prior is similar to a
> classical CI, they are not the same entity.
>
> Now, while I agree with the arguments about p-values and their
> validity, there is one aspect missing from this discussion. When
> creating a general use package like lme4, we are trying to create
> software that enables statisticians and researchers to perform the
> statistical analyses they need and interpret the results in ways
> that HELP them get published. While I admire Doug for "drawing a
> line in the sand" in regard to the use of p-values in published
> research, this is counter to HELPING the researcher publish their
> results. There has to be a better way to further your point in the
> community than FORCING your point upon them. Education of the next
> generation of researchers and journal editors is admittedly slow,
> but it is a much more community-friendly way of getting your point
> used in practice.
>
> Just my $0.02...
>
> Dave H
> --
> David Henderson, Ph.D.
> Director of Community
> REvolution Computing
> 1100 Dexter Avenue North, Suite 250
> 206-577-4778 x3203
> DNADave at Revolution-Computing.Com
> http://www.revolution-computing.com
>