[R-meta] negative reliability

Viechtbauer, Wolfgang (NP) wo||g@ng@v|echtb@uer @end|ng |rom m@@@tr|chtun|ver@|ty@n|
Fri Apr 28 12:43:57 CEST 2023

I am also not familiar with the correction by Krus and Helmstadter (1993) (it seems to be this article, in case anybody is interested: https://doi.org/10.1177/0013164493053003005), so I cannot really comment on this. I think in the end, you just have to make your own decisions here (unless somebody comes along with further wisdom) and document your choices. As always, a sensitivity analysis is also an option.
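For the sensitivity-analysis route, a minimal sketch with metafor is below. All the reliabilities, sample sizes, and the corrected/uncorrected values for the problematic study are made up for illustration; the idea is simply to pool once with the value as reported and once with the uncorrected value and compare.

```r
library(metafor)

# Back-transform a Spearman-Brown corrected reliability to the
# original split-half correlation: r_half = r_sb / (2 - r_sb)
sb_invert <- function(r_sb) r_sb / (2 - r_sb)

ri <- c(0.62, 0.55, 0.70, 0.48)   # hypothetical reliabilities
ni <- c(40, 55, 32, 60)           # hypothetical sample sizes

# Version A: include the problematic study's corrected value as reported
dat_a <- escalc(measure = "ZCOR", ri = c(ri, 0.30), ni = c(ni, 45))
res_a <- rma(yi, vi, data = dat_a)

# Version B: include the uncorrected (here, negative) value instead
dat_b <- escalc(measure = "ZCOR", ri = c(ri, -0.10), ni = c(ni, 45))
res_b <- rma(yi, vi, data = dat_b)

# Compare pooled estimates, back-transformed to the correlation scale
predict(res_a, transf = transf.ztor)
predict(res_b, transf = transf.ztor)
```

If the two pooled estimates (and heterogeneity statistics) are close, the choice between the corrected and uncorrected value is inconsequential and can just be documented as such.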

P.S.: Your message also arrived in my Inbox (if you recall from our previous communication, this was an issue in the past, but with the adjusted settings to the mailing list, this now seems to be resolved).


>-----Original Message-----
>From: Michael Dewey [mailto:lists using dewey.myzen.co.uk]
>Sent: Friday, 28 April, 2023 11:01
>To: R Special Interest Group for Meta-Analysis; Viechtbauer, Wolfgang (NP); James
>Cc: Catia Oliveira
>Subject: Re: [R-meta] negative reliability
>
>Dear Catia
>You can check whether it was transmitted by going to the list archive,
>where it appears.
>The fact that you got no response may be because we are all struggling
>with the idea of a test-retest or split-half reliability estimate which
>was negative and what we would do with it. So people who scored high the
>first time now score low? If it is split-half it suggests that the
>hypothesis that the test measures one thing is false.
>On 28/04/2023 01:45, Catia Oliveira via R-sig-meta-analysis wrote:
>> Dear all,
>> I apologise if I am spamming you but I think you didn't receive my previous
>> email. At least I was not notified.
>> I am running a meta-analysis on the reliability of a task (computed as a
>> correlation between sessions or halves of the task depending on whether it
>> is test-retest or split-half reliability) and I have come across one result
>> that I am not sure how to handle. According to the authors, they found
>> negative reliability and, because of that, they applied a correction
>> suggested by Krus and Helmstadter (1993). Thus, I am wondering if I should
>> use the original correlation or the corrected one. When authors applied the
>> Spearman-Brown correction, I reverted the estimates to the original
>> correlations, but with this correction I don't know if such an approach is
>> OK. My intuition would be to
>> use the uncorrected measure since that's the most common approach in the
>> sample and there isn't sufficient information to allow us to test the
>> impact of these corrections. But I would appreciate your input on this.
>> A second issue, somewhat in line with the previous one: what do you
>> recommend one do when multiple approaches are used to compute the
>> reliability of the task but only one converges with what was typically done
>> by other authors? I wouldn't be able to assess whether those decisions had
>> an impact on the reliability, as it is only one study, but I also don't want
>> to bias the findings with my selection (though I have to say the results are
>> quite consistent across approaches).
>> Thank you.
>> Best wishes,
>> Catia
