[R-meta] negative reliability

Catia Oliveira catia.oliveira at york.ac.uk
Fri Apr 28 02:45:07 CEST 2023


Dear all,

I apologise if I am spamming you, but I don't think my previous email went
through; at least I was not notified that it did.

I am running a meta-analysis on the reliability of a task (computed as a
correlation between sessions or between halves of the task, depending on
whether test-retest or split-half reliability was reported), and I have come
across one result that I am not sure how to handle. The authors found a
negative reliability and therefore applied a correction suggested by Krus
and Helmstadter (1993). I am wondering whether I should use the original
correlation or the corrected one. When authors applied the Spearman-Brown
correction I reverted to the original (uncorrected) correlation, but I am
not sure whether the same approach is appropriate here. My intuition would
be to use the uncorrected value, since that is the most common approach in
the sample and there is not enough information to test the impact of these
corrections. But I would appreciate your input on this.
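
For concreteness, reverting the Spearman-Brown corrected values amounts to
something like the following (just a sketch; the function name and the
example value are illustrative, not from any particular study):

# Undo the Spearman-Brown correction to recover the original
# split-half correlation: r_half = r_SB / (2 - r_SB)
unadjust_sb <- function(r_sb) {
  r_sb / (2 - r_sb)
}

# e.g. a reported corrected reliability of .80 corresponds to
# an uncorrected split-half correlation of about .67
unadjust_sb(0.80)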

A second, related issue: what do you recommend when a study computes the
reliability of the task with several different approaches, but only one of
them matches what other authors typically did? With only one such study I
cannot assess whether these analytic decisions have an impact on the
reliability estimates, but I also do not want to bias the findings through
my selection (though I should say the results are quite consistent across
approaches).
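
To make that second question concrete, the only check I can think of is to
rerun the pooled analysis once with each variant of that study's estimate
and compare. A minimal sketch, assuming metafor and Fisher-z transformed
correlations ('dat', 'ri' and 'ni' are placeholder names for my data frame
and its columns):

library(metafor)

# Fisher-z transform the reliability correlations and pool them
dat_z <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)
res   <- rma(yi, vi, data = dat_z)
predict(res, transf = transf.ztor)  # pooled estimate back on the r scale

# then substitute the alternative estimate(s) for that one study,
# rerun, and compare the pooled estimates and confidence intervals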

Thank you.

Best wishes,

Catia



