[R] acceptable p-level for scientific studies
Jim Lemon
bitwrit at ozemail.com.au
Fri Dec 20 00:42:03 CET 2002
Kyriakos Kachrimanis (et al.) wrote:
> I have a statistical question, that doesn't belong to this list, and I
> apologise for that in advance but I would appreciate your help very much.
> Is there some convention for selecting the alpha level for significance
> testing in scientific (e.g. chemical processes) studies? Most people use
> the 0.05 level but I could not find a reference to justify this. Why not
> 0.01 or 0.1? Montgomery in his book "Design and Analysis of Experiments"
> disagrees with setting a priori acceptable levels at all. Is it
> necessary to set a limit for significance testing since R can provide
> exact probability levels for the significance of each effect?
>
In general, setting arbitrary criteria for statistical significance seems
to be based upon a compromise between apparent progress (maximal
discovery) and theoretical durability (minimal disconfirmation). If we are
to build knowledge from ignorance or misapprehension, it is best to choose
methods and criteria that lead to an optimal compromise. As a method,
statistical evaluation of data has done a much better job than rhetorical
contention.
Criteria range from the apparently slack alpha = 0.1, in fields where it is
difficult to discover any regularity, to approximately 0.000000001 for
establishing an effect at "six sigma", where the variables are apparently
well described and measurement is correspondingly precise.
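For reference, the six-sigma figure is easy to check in R itself; the
following is only a rough sketch (the z value of 2.3 is arbitrary, used just
to illustrate comparing an exact p-value with the conventional cutoffs):

## One-sided normal tail area beyond six standard deviations --
## the "six sigma" criterion mentioned above.
pnorm(6, lower.tail = FALSE)
# [1] 9.865876e-10        (about 1e-9)

## Comparing an exact p-value against the usual conventions; the
## test statistic z = 2.3 is arbitrary, purely for illustration.
p <- 2 * pnorm(2.3, lower.tail = FALSE)
p                          # [1] 0.02144822
p < c(0.1, 0.05, 0.01)     # [1]  TRUE  TRUE FALSE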
In fact, what seems to happen is that researchers and reviewers find
criteria that allow them to advance, at least apparently, at a certain
rate. Thus my opinion is that a certain level of apparent progress is
psychologically necessary in research, and those in the messier areas are
willing to look a bit more foolish.
Jim