# [R-sig-ME] large data set implies rejection of null?

Fredrik Nilsson laf.nilsson at gmail.com
Sun Nov 28 09:26:57 CET 2010

```No. The thing to observe is that we are dealing with continuous
outcomes (the distribution of the mean is approximately normally
distributed), and if you compare two *different* populations then the
probability that their means are *exactly* equal is zero. That is, if
you really have two genuinely different populations, it is improbable
that their means are exactly the same.

So this is not a guideline; it is a fact. The key is that increasing
the number of samples increases the precision of the estimated mean,
and when the precision is high enough you will be able to detect even
the tiniest difference, given that there is one. Since we know that,
if the populations are different, the means differ almost surely, the
result follows.
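
To make the precision argument concrete, here is a quick R sketch
(the sd and the difference are just illustrative numbers, not from
any real data): the standard error of a difference of two means
shrinks like 1/sqrt(n), so the z statistic for any fixed nonzero
difference grows like sqrt(n).

sigma <- 0.05      # assumed within-group sd
delta <- 0.00012   # assumed true difference in means
for (n in c(1e4, 1e6, 1e8)) {
  se <- sigma * sqrt(2 / n)   # se of a difference of two means, n per group
  cat(sprintf("n per group = %.0e, z = %.2f\n", n, delta / se))
}

At n = 1e4 the z statistic is far below 1.96; by n = 1e8 it is around
17, so the fixed tiny difference has become "highly significant".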

What I mean here is that if you, for instance, have two different
machines producing chewing gum, where the mean weight from the first
is 5.00000... grams and the mean weight from the second is
5.00012... grams, then with large enough samples from each machine
you would be able to tell that there is indeed a difference. But
suppose the standard deviation in weight per piece is 0.05 grams;
then the small difference in mean weight would not influence your
chewing-gum experience, since the variation between two individual
pieces, whether from the same or from different machines, is so much
larger. That is why it is relevant to think in terms of relevant
differences. Note that I am not saying that the individual
observations are normally distributed.
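
Using the chewing-gum numbers above, power.t.test (from base R's
stats package) gives a rough idea of how many pieces per machine you
would need before such a tiny difference becomes detectable:

# n per machine needed to detect a 0.00012 g difference (sd 0.05 g)
# with 80% power at the 5% level
res <- power.t.test(delta = 0.00012, sd = 0.05, sig.level = 0.05, power = 0.8)
res$n   # on the order of 2.7 million pieces per machine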

Now, one could say that this is frequentist mumbo jumbo, since the
machines will not have a constant mean due to wear etc. But the point
is important when one wishes to show that a difference is negligible,
e.g. when comparing two different producers making the same pill
(equivalence trials/bioequivalence). One has to define how small
"negligible" is to begin with.
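
As a sketch of the equivalence idea (the +/- 0.01 g margin here is
purely hypothetical, and TOST is just one common procedure): the two
one-sided tests approach declares equivalence when the whole 90%
confidence interval for the difference lies inside the pre-specified
margin.

set.seed(1)
a <- rnorm(5000, 5.00000, 0.05)   # machine 1
b <- rnorm(5000, 5.00012, 0.05)   # machine 2
ci <- t.test(a, b, conf.level = 0.90)$conf.int
equivalent <- ci[1] > -0.01 && ci[2] < 0.01
equivalent   # TRUE here: the difference is within the margin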

No statistician will say that if you compared two samples from the
same population, their difference would be significant with
probability close to 1 if only the sample were large enough, which is
what you are testing when comparing dat.1 and dat.2 (their population
means are exactly equal).
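
A quick simulation makes the same point as the dat.1/dat.2 runs
below: when the null hypothesis is exactly true, p-values are uniform
on (0, 1), so the rejection rate stays near the nominal 5% no matter
how much data you collect.

set.seed(2)
p <- replicate(1000, t.test(rnorm(200), rnorm(200))$p.value)
mean(p < 0.05)   # stays close to 0.05, not close to 1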

Best regards,
Fredrik Nilsson

2010/11/27 Daniel Ezra Johnson <danielezrajohnson at gmail.com>
>
> On 11/24/10 07:59, Rolf Turner wrote:
> > It is well known amongst statisticians that having a large enough data set will
> > result in the rejection of *any* null hypothesis, i.e. will result in a small
> > p-value.
>
> This seems to be a well-accepted guideline, probably because in the
> social sciences, usually, none of the predictors truly has an effect
> size of zero.
> However, unless I am misunderstanding it, the statement appears to me
> to be more generally false.
> For example, when the population difference of means actually equals
> zero, in a t-test, very large sample sizes do not lead to small
> p-values.
>
> set.seed(1)
> n <- 1000000  # 10^6
> dat.1 <- rnorm(n/2,0,1)
> dat.2 <- rnorm(n/2,0,1)
> t.test(dat.1,dat.2,var.equal=T)
> # p = 0.60
>
> set.seed(1)
> n <- 10000000  # 10^7
> dat.1 <- rnorm(n/2,0,1)
> dat.2 <- rnorm(n/2,0,1)
> t.test(dat.1,dat.2,var.equal=T)
> # p = 0.48
>
> set.seed(1)
> n <- 100000000  # 10^8
> dat.1 <- rnorm(n/2,0,1)
> dat.2 <- rnorm(n/2,0,1)
> t.test(dat.1,dat.2,var.equal=T)
> # p = 0.80
>
> Such results - where the null hypothesis is NOT rejected - would
> presumably also occur in any experimental situations where the null
> hypothesis was literally true, regardless of the size of the data set.
> No?
>
> Daniel
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models

```