[R] Interpreting the results of Friedman test

Doerte doerte.apelt at gmx.de
Thu Apr 23 22:32:19 CEST 2009


I have problems interpreting the results of a Friedman test. It seems
to me that the p-value resulting from a Friedman test, and with it the
"significance", has to be interpreted differently from the p-value of,
e.g., an ANOVA. Is that right?

Let me describe the problem in some detail: I am testing a lot of
different hypotheses in my observer study, and only for some of them
are the premises for an ANOVA fulfilled (checked with the Shapiro-Wilk
and Bartlett tests). For the others I perform a Friedman test.
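For concreteness, the workflow I follow looks roughly like the sketch below. The data, variable names, and design are invented for illustration, since my actual data set is not shown here:

```r
# Hypothetical unreplicated complete block design: all names and numbers
# below are made up for illustration.
set.seed(1)
score     <- rnorm(24)                    # measured values
condition <- gl(4, 6)                     # 4 conditions, 6 observers each
observer  <- factor(rep(1:6, times = 4))  # blocking factor (one value per cell)

# Premise checks for the ANOVA
shapiro.test(residuals(aov(score ~ condition)))  # normality of residuals
bartlett.test(score ~ condition)                 # homogeneity of variances

# If the premises hold, the parametric test; otherwise the rank-based one
summary(aov(score ~ condition + Error(observer/condition)))
friedman.test(score, groups = condition, blocks = observer)
```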

To my surprise, the p-value of the Friedman test is < 0.05 for all the
hypotheses I tested. I therefore tried to compare the two methods by
applying both tests (Friedman, ANOVA) to the same set of data.
While the ANOVA gives p = 0.34445 (--> no significant difference
between the groups), the Friedman test gives p = 1.913e-06 (--> a
highly significant difference between the groups?).

How can this be?

Or am I doing something wrong? I have three measured values for each
condition. For the ANOVA I use all of them; for the Friedman test I
calculated the geometric mean of the three values, since this test
does not work with replicated values. Is this a crude mistake?
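In code, what I do with the replicates looks roughly like this. Again the numbers and names are invented, since `friedman.test()` requires an unreplicated complete block design and I therefore collapse the three replicates per cell first:

```r
# Made-up data: three replicate measurements per (observer, condition)
# cell; friedman.test() requires exactly one value per cell.
set.seed(2)
d <- expand.grid(rep       = 1:3,
                 observer  = factor(1:5),
                 condition = factor(1:3))
d$value <- rexp(nrow(d))  # positive, skewed measurements

# Collapse the replicates to one value per cell; the geometric mean is
# what I used, the arithmetic mean or median would be alternatives.
cell <- aggregate(value ~ observer + condition, data = d,
                  FUN = function(x) exp(mean(log(x))))  # geometric mean

friedman.test(value ~ condition | observer, data = cell)
```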

Thanks in advance for any help.
