# [R-sig-ME] Is multiple-hypothesis-testing correction needed when the goal is decision making?

René b|mono@om @end|ng |rom gm@||@com
Wed Mar 27 17:46:07 CET 2019

Hi P,
there are several different aspects to your question. Most importantly,
your question is not a frequentist question but a Bayesian one :))

1. If you are asking for "evidence", you are basically asking for a Bayes
factor (evidence for a hypothesis given some data), not for a frequentist
p-value (the probability of the data given a hypothesis). Also, a p-value
does not confirm -a hypothesis- (or, in the most generous frequentist
interpretation, does not provide evidence -for- something), but (always and
only) falsifies the null hypothesis (e.g. if p < .05 then we decide to act
as if the null hypothesis is false -> falsified; this does not tell us
whether the alternative hypothesis is true). Thus, if you want to use
p-values, you could indeed say: if p2 and p3 are > .05, the corresponding
null hypotheses have not been falsified, but the one behind p1 was. That
leaves you with one falsification and two inconclusive tests (p > .05
without an a priori power analysis is inconclusive).

And if we take the idea of "scientific laws" seriously, then a
falsification (-> the null hypothesis behind p1) means that the "law"
(-> a = b) is not a "law", because laws do not have exceptions; therefore a
is not equal to b, and you can decide to act as if a is unequal to b.
Please note, this would be ideal and valid, but very few scientists really
act like this, because they want answers to non-frequentist questions like
yours... But if we follow this logic through: if you now want to generalize
this conclusion across the different dependent variables, you may well earn
some criticism, because you falsified a 'law' for "attention to
vibrations", not for (e.g.) "differentiating vibrations". You could
nonetheless do so, but that implicitly makes the argument that "attention
to vibrations" necessarily (but not sufficiently) precedes "differentiating
vibrations"; the same goes for synchronization. This reasoning could go
on... which means: if you want to make a sound statement about whether
a > b (for all or some of the DVs), do it Bayesian.
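As a side note on the multiply-by-3 rule mentioned in the question: that is the Bonferroni adjustment, which base R implements as p.adjust() (the p-values below are made up purely for illustration):

```r
# Hypothetical p-values from the three per-DV mixed models
p_raw <- c(p1 = 0.010, p2 = 0.200, p3 = 0.350)

# Bonferroni: multiply each p-value by the number of tests, capped at 1
p_adj <- p.adjust(p_raw, method = "bonferroni")

# Which null hypotheses would still be rejected after correction?
rejected <- p_adj < 0.05
```

Here only p1 survives the correction (0.01 * 3 = 0.03 < 0.05); note that 3 * 0.35 = 1.05 is capped at 1.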

2. So, if you are asking for "evidence" that a > b, then you can indeed
quantify this with a Bayes factor (apart from any p-value), if you estimate
the means of a and b separately (and in a proper way; check out the brms
package in R, or maybe anovaBF from the BayesFactor package with random
factors, depending on the complexity of your design). Get the posterior
samples for a and b from the model output; then subtract the posterior
samples of 'b' from those of 'a' (the subtraction is made for each
iteration of the sampler, and values > 0 are iterations in which a > b).
You then have a distribution of a-b differences: count how often
'a-b > 0', divide it by the count of how often 'a-b < 0', and this ratio is
the evidence for a > b versus a < b. (Note this is not the same as testing
"a-b = 0", which uses a different method.) You cannot get closer to the
answer of your question :))
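A minimal base-R sketch of this counting procedure, with simulated draws standing in for real posterior samples (in practice post_a and post_b would come from the fitted model, e.g. via as_draws_df() in brms; all numbers here are made up):

```r
set.seed(1)

# Stand-ins for 4000 MCMC draws of the posterior means of a and b
post_a <- rnorm(4000, mean = 0.6, sd = 0.2)
post_b <- rnorm(4000, mean = 0.4, sd = 0.2)

# Subtract draw by draw: positive values are iterations in which a > b
d <- post_a - post_b

# Evidence ratio: count of draws with a > b over count with a < b
evidence_ratio <- sum(d > 0) / sum(d < 0)

# Equivalently, the posterior probability that a > b
prob_a_gt_b <- mean(d > 0)
```

An evidence_ratio well above 1 favours a > b; a value near 1 is inconclusive.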

Hope this helps
Oh... almost forgot:
3. You could do all this in one multivariate mixed model (i.e. testing
a > b for all three DVs simultaneously), which is possible with the brms
package; then follow the steps above, or simply use the hypothesis()
function afterwards, which does this for you ;))
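A hedged sketch of what that multivariate model could look like (not run here, since it needs brms and a Stan toolchain installed; mydata, dv1..dv3, site, and subject are placeholder names, and the exact coefficient labels passed to hypothesis() should be checked against the fitted model's output):

```r
library(brms)

# Model all three DVs jointly; 'site' is the two-level IV (a vs b),
# 'subject' is the grouping factor of the mixed model
fit <- brm(
  mvbind(dv1, dv2, dv3) ~ site + (1 | subject),
  data = mydata
)

# With 'a' as the reference level, each site coefficient estimates b - a,
# so a > b corresponds to that coefficient being negative
hypothesis(fit, c("dv1_siteb < 0",
                  "dv2_siteb < 0",
                  "dv3_siteb < 0"))
```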

Best, René

Am Mi., 27. März 2019 um 16:42 Uhr schrieb Pardis Miri <parism using stanford.edu
>:

> Dear forum,
>
> This is based on the frequentist approach.
> Suppose we have an IV of body site with two levels, a and b, and three
> DVs: dv1, dv2, and dv3. For example: how well the participant attends to
> the vibrations, how well the participant differentiates the vibrations,
> and how well the participant synchronizes their breathing with the
> vibrations.
> We have three hypotheses: a > b for dv1, dv2, and dv3. We run a mixed
> model for each DV, yielding p-values p1, p2, and p3. We know that if we
> want to claim that all three hypotheses are true, we multiply each p-value
> by 3 (i.e. 3*p1, 3*p2, and 3*p3) and test whether each is less than 0.05.
>
> However, I am instead looking for some evidence (a recommendation) that
> site a is better than site b. In this case, can we simply say yes if at
> least one of the p-values is < 0.05? That is, if p1 < 0.05 but p2 and p3
> are both > 0.05, can we conclude that the dv1 hypothesis shows evidence
> while the two other hypotheses are inconclusive?
>
> Thank you all!
> P
>
>
>
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
