[R-sig-ME] p-values vs likelihood ratios

Ben Bolker bbolker at gmail.com
Tue Feb 22 04:27:14 CET 2011

On 11-02-21 10:53 AM, Manuel Spínola wrote:
> What about evidence ratios (based on Akaike's weights) as described in:
> Burnham, K. P., and D. R. Anderson.  2002.  Model Selection and
> Multimodel Inference: A Practical Information-Theoretic Approach, 2nd
> edition.  Springer-Verlag, New York.
> Anderson, D. R.  (2008).  Model based inference in the life sciences:  a
> primer on evidence.  Springer, New York, NY.
> I still don't understand what the benefits of p-values are over, for
> example, effect sizes.  Many well-known statisticians have been very
> critical of p-values.
> See this site against null hypothesis significance testing and
> p-values: http://warnercnr.colostate.edu/~anderson/nester.html
> What is the use of a p-value in observational studies?
> Best,
> Manuel Spínola

  There are lots of ways to abuse p-values, but there are lots of ways
to abuse any kind of statistics.  Burnham and Anderson have made
valuable contributions ... my problem with Akaike weights is that it's
not really clear *precisely* what they mean.  Again, I don't have an
issue with people who want to eschew hypothesis testing.  I have a
problem with the people who denounce hypothesis testing and then
re-introduce it by the back door via a method that is less well
understood (at least we have a painfully clear understanding of the
problems of p-values).
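
  For reference, the evidence ratios Manuel mentions are mechanically
simple to compute; here is a minimal sketch with two toy lm() fits
(the data and model names are illustrative, not from this thread):

```r
set.seed(101)
d <- data.frame(x = 1:20)
d$y <- 2 + 0.5 * d$x + rnorm(20)
m1 <- lm(y ~ 1, data = d)   # intercept-only model
m2 <- lm(y ~ x, data = d)   # model with a slope

aic   <- c(m1 = AIC(m1), m2 = AIC(m2))
delta <- aic - min(aic)                       # AIC differences (Delta_i)
w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
w["m2"] / w["m1"]   # evidence ratio for m2 relative to m1
```

The mechanics are easy; as I say above, it's the precise interpretation
of these weights that I find unclear.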

  As yet more interesting reading I suggest Stephens et al 2005, Journal
of Applied Ecology doi: 10.1111/j.1365-2664.2005.01002.x


> On 21/02/2011 08:45 a.m., Mike Lawrence wrote:
>> On Mon, Feb 21, 2011 at 9:24 AM, Ben Bolker <bbolker at gmail.com> wrote:
>>>  I don't see why you're using AIC differences here.
>> My understanding is that taking the difference of the values resulting
>> from AIC() is equivalent to computing the likelihood ratio then
>> applying the AIC correction to account for the different number of
>> parameters in each model (then log-transforming at the end).
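
  [A quick check of the relation Mike describes, sketched with toy lm()
fits (names illustrative); since R defines AIC as -2*logLik + 2*df, the
AIC difference is exactly twice the log likelihood ratio plus twice the
difference in parameter counts:

```r
set.seed(101)
d <- data.frame(x = 1:20)
d$y <- 2 + 0.5 * d$x + rnorm(20)
m0 <- lm(y ~ 1, data = d)
m1 <- lm(y ~ x, data = d)

dAIC <- AIC(m0) - AIC(m1)   # AIC difference between the two models
## twice the log likelihood ratio, plus twice the difference in the
## number of estimated parameters:
dLL  <- 2 * (as.numeric(logLik(m1)) - as.numeric(logLik(m0))) +
        2 * (attr(logLik(m0), "df") - attr(logLik(m1), "df"))
all.equal(dAIC, dLL)   ## TRUE
```
]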
>> My original exposure to likelihood ratios (and the AIC/BIC correction
>> thereof) comes from Glover & Dixon (2004,
>> http://www.psych.ualberta.ca/~pdixon/Home/Preprints/EasyLRms.pdf), who
>> describe the raw likelihood ratio as inappropriately favoring the
>> model with more parameters because more complex models have the
>> ability to fit noise more precisely than less complex models; hence
>> the application of some form of correction to account for the
>> differing complexity of the models being compared.
>> I wonder, however, whether cross validation might be a less
>> controversial approach to achieving fair comparison of two models that
>> differ in parameter number. That is, fit the models to a subset of the
>> data, then compute the likelihoods on another subset of the data. I'll
>> play around with this idea and report back any interesting findings...
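
  [One way the held-out-likelihood idea might be sketched for Gaussian
models; the split and the variable names are illustrative, and a single
split of course ignores the usual questions about how folds are chosen:

```r
set.seed(101)
d <- data.frame(x = 1:40)
d$y <- 2 + 0.5 * d$x + rnorm(40)
train <- d[1:20, ]
test  <- d[21:40, ]

m0 <- lm(y ~ 1, data = train)
m1 <- lm(y ~ x, data = train)

## out-of-sample log-likelihood, using the training-set sigma estimate
oosLL <- function(m, newdata) {
  sum(dnorm(newdata$y, mean = predict(m, newdata),
            sd = summary(m)$sigma, log = TRUE))
}
oosLL(m1, test) - oosLL(m0, test)   # held-out log likelihood ratio
```
]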
>>>   If one is really trying to test for "evidence of an effect" I see
>>> nothing wrong with a p-value stated on the basis of the null
>>> distribution of deviance differences between a full and a reduced model
>>> -- it's figuring out that distribution that is the hard part. If I were
>>> doing this in a Bayesian framework I would look at the credible interval
>>> of the parameters (although doing this for multi-parameter effects is
>>> harder, which is why some MCMC-based "p values" have been concocted on
>>> this list and elsewhere).
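
  [In the simplest fixed-effects case the deviance-difference test Ben
describes looks like the sketch below (toy data; names illustrative).
For mixed models, figuring out the reference distribution is, as he
says, the hard part -- the naive chi-squared approximation can be
unreliable, especially for tests involving variance components:

```r
set.seed(101)
d <- data.frame(x = 1:20)
d$y <- 2 + 0.5 * d$x + rnorm(20)
full    <- glm(y ~ x, data = d)
reduced <- glm(y ~ 1, data = d)
anova(reduced, full, test = "Chisq")  # p-value from the deviance difference
```
]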
>> We'll possibly have to simply disagree on the general utility of
>> p-values for cumulative science (as opposed to one-off decision
>> making). I do, however, agree that Bayesian credible intervals have a
>> role in cumulative science insofar as they permit a means of relative
>> evaluation of models that differ not in the presence of an effect but
>> in the specific magnitude of the effect, as may be encountered in more
>> advanced/fleshed-out areas of inquiry. Otherwise, in areas where the
>> simple existence of an effect is of theoretical interest, computing
>> credible intervals on effects seems like overkill and has (from my
>> anti-p perspective) a dangerously easy connection to null-hypothesis
>> significance testing.
>> _______________________________________________
>> R-sig-mixed-models at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
> -- 
> *Manuel Spínola, Ph.D.*
> Instituto Internacional en Conservación y Manejo de Vida Silvestre
> Universidad Nacional
> Apartado 1350-3000
> Heredia
> mspinola at una.ac.cr
> mspinola10 at gmail.com
> Teléfono: (506) 2277-3598
> Fax: (506) 2237-7036
> Personal website: Lobito de río
> <https://sites.google.com/site/lobitoderio/>
> Institutional website: ICOMVIS <http://www.icomvis.una.ac.cr/>
