[R-sig-ME] lmer and p-values
Ben Bolker
bbolker at gmail.com
Mon Mar 28 23:18:08 CEST 2011
On 03/28/2011 01:04 PM, Iker Vaquero Alba wrote:
>
> Ok, I have had a look at the mcmcsamp() function. If I've got it
> right, it generates an MCMC sample from the parameters of a model
> fitted with "lmer" or a similar function.
>
> But my doubt now is: even if I cannot trust the p-values from the
> ANOVA comparing two models that differ in one term, is it still OK
> to simplify the model that way until I reach my Minimum Adequate
> Model (MAM), and then use mcmcsamp() to get a trustworthy p-value
> for the terms I'm interested in from this MAM? Or should I use
> mcmcsamp() directly on my maximal model and simplify it according to
> the p-values obtained from it?
>
> Thank you. Iker
Why are you simplifying the model in the first place? (That is a real
question, with only a tinge of prescriptiveness.) Among the active
contributors to this list and other R lists, I would say that the most
widespread philosophy is that one should *not* do backwards elimination
of (apparently) superfluous/non-significant terms in the model. (See
myriad posts by Frank Harrell and others.)
If you do insist on eliminating terms, then the LRT (anova()) p-values
are no more or less reliable for the purposes of elimination than they
are for the purposes of hypothesis testing.
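[Editor's note: a minimal sketch of the LRT comparison under discussion, using lme4's built-in sleepstudy data; the data set and formulas are illustrative, not from the original thread. Models compared by anova() should be fitted by maximum likelihood (REML = FALSE), not REML:]

```r
library(lme4)

## Fit nested models by ML so the likelihood ratio test is meaningful
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
m0 <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy, REML = FALSE)

## LRT: chi-square with 1 df (difference in parameter counts);
## anticonservative for small data sets, as discussed below
anova(m0, m1)
```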
>
> --- On Mon, 28/3/11, Ben Bolker <bbolker at gmail.com> wrote:
>
>
> From: Ben Bolker <bbolker at gmail.com>
> Subject: Re: [R-sig-ME] lmer and p-values
> To: r-sig-mixed-models at r-project.org
> Date: Monday, 28 March 2011 18:27
>
> Iker Vaquero Alba <karraspito at ...> writes:
>
> >
> >
> > Dear list members:
> >
> > I am fitting a model with lmer, because I need to fit some nested
> > as well as non-nested random effects in it. I am doing a split-plot
> > simplification, dropping terms from the model and comparing the
> > models with and without each term. When doing an ANOVA between one
> > model and its simplified version, I get a chi-square value with 1 df
> > (df from the bigger model minus df from the simplified one) and an
> > associated p-value.
> >
> > I was just wondering if it's correct to present these chi-square
> > and p-values as the result of testing the effect of a certain term
> > in the model. I am a bit confused, as if I were doing this same
> > analysis with lme, I would be getting F-values and associated
> > p-values.
> >
>
> When you do anova() in this context you are doing a likelihood ratio
> test, which is equivalent to doing an F test with 1 numerator df and
> a very large (infinite) denominator df.
> As Pinheiro and Bates 2000 point out, this is
> dangerous/anticonservative if your data set is small, for some value
> of "small".
> Guessing an appropriate denominator df, or using mcmcsamp(), or
> parametric bootstrapping, or something, will be necessary if you
> want a more reliable p-value.
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
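[Editor's note: the parametric bootstrap mentioned above can be sketched as follows; the sleepstudy data, model formulas, and nsim value are illustrative. simulate() and refit() are lme4 functions; the idea is to refit both models to responses simulated under the null model and compare the observed LRT statistic with its simulated distribution:]

```r
library(lme4)

m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
m0 <- lmer(Reaction ~ 1 + (1 | Subject), sleepstudy, REML = FALSE)

## Observed likelihood ratio statistic
obsLRT <- 2 * (logLik(m1) - logLik(m0))

nsim <- 200  ## illustrative; use more replicates in real applications
simLRT <- replicate(nsim, {
  ysim <- simulate(m0)[[1]]  ## simulate a response under the null model
  2 * (logLik(refit(m1, ysim)) - logLik(refit(m0, ysim)))
})

## Bootstrap p-value: proportion of simulated statistics >= observed
mean(simLRT >= obsLRT)
```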