[R-sig-ME] P value for a large number of degrees of freedom in lmer

Steven McKinney smckinney at bccrc.ca
Tue Nov 23 20:40:28 CET 2010


You have to determine what counts as a "real" effect - how big a departure from the null is
scientifically or biologically relevant (or relevant in whatever your area of interest is)?

If differences of scientific relevance are not too small, you will have enough power to
detect differences of importance.  This is the situation we all want to be in:
enough data to declare unambiguously that a difference of relevance has been detected.

Some of your differences may be statistically significant even though the estimated size is
smaller than your difference of scientific relevance.  Such differences, though statistically
significant, are not scientifically relevant.

Though it is not trivial to pin down the size of a scientifically relevant difference, it can be done
with some deliberation by considering a range of difference values, from ludicrously small
to ridiculously large.  Somewhere in between lies a reasonable difference of relevance.
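The distinction above can be sketched numerically.  The following is a minimal illustration in Python rather than R/lmer; the sample size, the true effect of 0.02 units, and the relevance threshold of 0.5 units are all invented for the example:

```python
import math
import random

random.seed(42)

# Hypothetical setup: a tiny true effect (0.02 units, sd = 1) with a very
# large sample, judged against an invented scientific-relevance threshold.
n = 200_000
true_effect = 0.02
relevance_threshold = 0.5

data = [random.gauss(true_effect, 1.0) for _ in range(n)]
mean = sum(data) / n
se = 1.0 / math.sqrt(n)               # known sd = 1, so a simple z-test applies
z = mean / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"estimate = {mean:.4f}, p = {p:.2e}")
print("statistically significant:", p < 0.05)
print("scientifically relevant:  ", abs(mean) >= relevance_threshold)
```

With n this large the p-value is minuscule, yet the estimated effect falls far below the relevance threshold: statistically significant, scientifically irrelevant.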



Steven McKinney

________________________________________
From: r-sig-mixed-models-bounces at r-project.org [r-sig-mixed-models-bounces at r-project.org] On Behalf Of Arnaud Mosnier [a.mosnier at gmail.com]
Sent: November 23, 2010 11:25 AM
To: Rolf Turner
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] P value for a large number of degrees of freedom in lmer

I agree, but how can I test that a significant result is due to a real effect
rather than to the sheer amount of data?
I thought about subsetting my dataset and rerunning the model X times to see if
the result persists ... but you could also argue that by doing so I will
eventually find a (small enough) subset size at which I no longer detect
the effect :-)
I also agree that the term "bias" was not the right one ... but is there a
method to increase confidence in these results?

cheers,

Arnaud
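The subsetting idea described above can be sketched as follows.  This is an illustrative Python simulation, not lmer; the dataset, effect size (0.05), and subset sizes are invented.  It exhibits exactly the behaviour Arnaud anticipates: the same effect loses "significance" once the subset is small enough.

```python
import math
import random

random.seed(1)

# Invented data: a small true effect of 0.05 with unit variance.
full = [random.gauss(0.05, 1.0) for _ in range(200_000)]

def p_value(sample):
    """Two-sided one-sample z-test against 0, assuming known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# Re-run the "model" on progressively smaller random subsets.
for size in (200_000, 20_000, 2_000, 200):
    sub = random.sample(full, size)
    print(f"n = {size:>7}: p = {p_value(sub):.3g}")
```

The p-value grows as the subset shrinks, so "at what n does significance vanish?" is really a question about power, not about whether the effect is real - which is why the effect-size framing in the reply below the thread is the more useful one.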

2010/11/23 Rolf Turner <r.turner at auckland.ac.nz>

>
> It is well known amongst statisticians that a large enough data set will
> result in the rejection of *any* point null hypothesis that is not exactly
> true, i.e. will yield an arbitrarily small p-value.  There is no ``bias''
> involved.
>
>        cheers,
>
>                Rolf Turner
>
> On 24/11/2010, at 4:06 AM, Arnaud Mosnier wrote:
>
> > Dear UseRs,
> >
> > I am using a database containing nearly 200 000 observations occurring in
> > 33 groups.
> > With a model of the form ( y ~ x + (1|group) ) in lmer, my number of
> > degrees of freedom is really large.
> > I am wondering whether this large df has an impact on the p values, mainly
> > whether it could lead to considering the effect of a variable significant
> > while it is not ...
> > and if that is the case, is there a correction to apply to the
> > results to take this bias into account?
> >
> > thanks !
> >
> >
> >
> > _______________________________________________
> > R-sig-mixed-models at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
>
