[R-sig-ME] Reconciling Near Identical AIC Values and Highly Significant P Value
Ben Bolker
bbolker at gmail.com
Mon Apr 14 18:25:07 CEST 2014
On 14-04-14 10:08 AM, AvianResearchDivision wrote:
> Hi,
>
> Thank you for your response. How then would you proceed with testing two
> models that are not nested within each other? I suppose I could compare
> AIC values, but is it correct to use the AIC values obtained using my
> initial method of anova(model2,model)?
>
> Thank you,
> Jacob
I think you might have to back up and think about what hypothesis
you're testing when you're comparing two non-nested models. You could
consider Vuong's test
http://en.wikipedia.org/wiki/Vuong%27s_closeness_test ;
http://fisher.osu.edu/~schroeder.9/AMIS900/Vuong1989.pdf ...
alternatively, I do think comparing AICs makes sense. AIC(model,model2)
will just give you a list of AIC values. bbmle::AICtab(model,model2)
will give you a slightly prettier output.
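For instance, a minimal sketch (data and variable names are
hypothetical, borrowing the length/surface example from Emmanuel's
message below; models differing in their fixed effects should be
fitted by ML, i.e. REML = FALSE, before their likelihoods or AICs
are compared):

  library(lme4)
  ## two non-nested fits: same structure, different predictor
  model  <- lmer(y ~ length  + (1 | site), data = dat, REML = FALSE)
  model2 <- lmer(y ~ surface + (1 | site), data = dat, REML = FALSE)
  AIC(model, model2)            ## data frame of df and AIC values
  bbmle::AICtab(model, model2)  ## sorted table of dAIC and df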
Keep the various limitations of AIC in mind too (it's an asymptotic
criterion, and it assumes parameters are in the interior of the
parameter space -- see http://glmm.wikidot.com/faq).
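If you go the Vuong route, the nonnest2 package is one
implementation; whether its vuongtest() accepts your particular
model class is an assumption you'd want to check:

  ## Vuong's closeness test for two non-nested candidates
  # install.packages("nonnest2")
  nonnest2::vuongtest(model, model2, nested = FALSE)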
Ben Bolker
>
>
> On Sun, Apr 13, 2014 at 3:06 PM, Emmanuel Curis <
> emmanuel.curis at parisdescartes.fr> wrote:
>
>> Hi,
>>
>> I may completely misunderstand your problem, but if you replace a
>> predictor variable with another one so that the number of
>> parameters stays the same (say, replacing "length" with "surface"
>> in a linear regression), then
>>
>> 1) comparing AICs and comparing likelihood values are the same
>> thing, since the number of parameters does not change;
>> hence same AIC <=> the predictors give equally good fits
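>>
>> (a quick check of 1), using your model and model2: since
>> AIC = -2*logLik + 2*k, equal k means the AIC difference equals
>> -2 times the log-likelihood difference)
>>
>>   AIC(model) - AIC(model2)   ## same as:
>>   -2 * (as.numeric(logLik(model)) - as.numeric(logLik(model2)))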
>>
>> 2) the chi-square test is meaningless, since the models are not
>> nested. Here, I guess you get the very low p-value because the
>> function uses a chi-square with 0 degrees of freedom [same
>> number of parameters...], which is the constant 0; any value
>> other than 0 has zero probability under it, hence
>> p = 0 < whatever threshold you want...
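>>
>> (you can see this directly in R: a chi-square with df = 0 is a
>> point mass at 0, so any positive statistic -- e.g. your 0.1062 --
>> has zero upper-tail probability)
>>
>>   pchisq(0.1062, df = 0, lower.tail = FALSE)  ## exactly 0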
>>
>> From 2), it follows that comparing AICs is, of your two options,
>> the only valid one when the models are not nested.
>>
>> For the second point, I don't know.
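>> That said, if the halving argument from the glmm FAQ applies (the
>> null value of a variance, zero, sits on the boundary of its
>> parameter space), it is easy to apply by hand -- a sketch, with
>> hypothetical model names:
>>
>>   ## model.full includes the random effect, model.red drops it
>>   ## (e.g. an lm() fit vs. the corresponding lmer() fit,
>>   ## both fitted by ML so the log-likelihoods are comparable)
>>   dev <- 2 * (as.numeric(logLik(model.full)) -
>>               as.numeric(logLik(model.red)))
>>   0.5 * pchisq(dev, df = 1, lower.tail = FALSE)  ## halved p-value
>>   ## or simulate the null distribution instead of halving:
>>   # RLRsim::exactRLRT(model.full)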
>>
>> Hope this helps,
>>
>> On Sun, Apr 13, 2014 at 02:52:20PM -0400, AvianResearchDivision wrote:
>> « Hi all,
>> «
>> « When comparing identical models (the only difference being the
>> « predictor variable, so the d.f. are the same) in lme4 using
>> « anova(model2, model), I sometimes see nearly identical AIC
>> « values, like model2 = 1479.6 and model = 1479.5, and a very low
>> « chi-sq. value like 0.1062, yet an extremely low p-value of
>> « <0.0001. How would you reconcile this? Should we be more
>> « concerned with looking for differences in AIC values of >3 when
>> « determining a better-fitting model, rather than looking at a
>> « p-value?
>> «
>> « Secondly, I read on the glmm.wikidot.com/faq page that when
>> « testing for the significance of random effects, the p-values
>> « returned by LRTs are conservative: the true p-value is roughly
>> « half of what is reported. Do you find that what Pinheiro and
>> « Bates (2000) state is sufficient to justify reporting the
>> « significance of random effects when the reported p-values are
>> « between 0.05 and 0.10? And is it enough to convince you that is
>> « the case, especially when examining the raw data with this in
>> « mind?
>> «
>> « Thank you,
>> « Jacob
>>
>> --
>> Emmanuel CURIS
>> emmanuel.curis at parisdescartes.fr
>>
>> Page WWW: http://emmanuel.curis.online.fr/index.html
>>