[R-meta] negative reliability
Catia Oliveira, sending from york.ac.uk
Sat Apr 29 00:25:28 CEST 2023
Dear Michael and Wolfgang,
Thank you for your reply. I am also intrigued as to why a negative
correlation between test and retest would be encountered. I will think
about it carefully and plan to run a sensitivity analysis.
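One minimal form such a sensitivity analysis could take, sketched here with made-up estimates and sample sizes (simple inverse-variance pooling in Fisher-z space; this is only an illustration, not a stand-in for a full multilevel model, and the function names are hypothetical):

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation."""
    return math.atanh(r)

def pooled_r(rs, ns):
    """Inverse-variance weighted pooled correlation, using var(z) ~ 1/(n - 3)."""
    weights = [n - 3 for n in ns]
    z_bar = sum(w * fisher_z(r) for w, r in zip(weights, rs)) / sum(weights)
    return math.tanh(z_bar)

# Hypothetical reliability estimates; the last one is the contested negative value.
rs = [0.62, 0.55, 0.70, -0.15]
ns = [40, 55, 30, 25]

print(pooled_r(rs, ns))              # pooled estimate including the negative value
print(pooled_r(rs[:-1], ns[:-1]))    # pooled estimate excluding it
```

Comparing the two pooled values (and, analogously, pooling with the corrected versus uncorrected coefficient) shows directly how much the contested estimate moves the summary.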
Do you have any thoughts about the second question?
A second issue, somewhat in line with the previous one: what do you
recommend one do when multiple approaches are used to compute the
reliability of the task, but only one converges with what was typically done
by other authors? I wouldn't be able to assess whether those decisions made
an impact on the reliability, as it is only one study, but I also don't want
to bias the findings with my selection (though I have to say the results are
quite consistent across approaches). Or do you think I should just include
all the results, as I am already using a multilevel approach? I just
wouldn't be able to test whether the manipulations affect the results, and
they may increase the heterogeneity of the results.
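For the Spearman-Brown reversion mentioned in the quoted message below, the standard two-half prophecy formula and its algebraic inverse can be sketched as follows (hypothetical function names, shown only to make the round trip explicit):

```python
def spearman_brown(r_half):
    """Step a split-half correlation up to full-length reliability: 2r / (1 + r)."""
    return 2 * r_half / (1 + r_half)

def invert_spearman_brown(r_full):
    """Recover the original half-test correlation from a stepped-up value: r / (2 - r)."""
    return r_full / (2 - r_full)

# Round trip with a hypothetical split-half correlation.
r = 0.50
assert abs(invert_spearman_brown(spearman_brown(r)) - r) < 1e-12
```

Note that a negative split-half correlation remains negative after the step-up, which is part of why a "corrected" negative reliability is hard to interpret.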
On Fri, 28 Apr 2023 at 11:44, Viechtbauer, Wolfgang (NP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> I am also not familiar with the correction by Krus and Helmstadter (1993)
> (seems to be this article in case anybody is interested:
> https://doi.org/10.1177/0013164493053003005) so I cannot really comment
> on this. I think in the end, you just have to make your own decisions here
> (unless somebody comes up with further wisdom) and document your choices. As
> always, a sensitivity analysis is also an option.
> P.S.: Your message also arrived in my Inbox (if you recall from our
> previous communication, this was an issue in the past, but with the
> adjusted settings to the mailing list, this now seems to be resolved).
> >-----Original Message-----
> >From: Michael Dewey [mailto:lists using dewey.myzen.co.uk]
> >Sent: Friday, 28 April, 2023 11:01
> >To: R Special Interest Group for Meta-Analysis; Viechtbauer, Wolfgang
> (NP); James
> >Cc: Catia Oliveira
> >Subject: Re: [R-meta] negative reliability
> >Dear Catia
> >You can check whether it was transmitted by going to the list archive,
> >where it appears.
> >The fact that you got no response may be because we are all struggling
> >with the idea of a test-retest or split-half reliability estimate which
> >was negative and what we would do with it. So people who scored high the
> >first time now score low? If it is split-half it suggests that the
> >hypothesis that the test measures one thing is false.
> >On 28/04/2023 01:45, Catia Oliveira via R-sig-meta-analysis wrote:
> >> Dear all,
> >> I apologise if I am spamming you but I think you didn't receive my
> >> email. At least I was not notified.
> >> I am running a meta-analysis on the reliability of a task (computed as a
> >> correlation between sessions or halves of the task, depending on whether
> >> it is test-retest or split-half reliability) and I have come across one
> >> study that I am not sure how to handle. According to the authors, they
> >> found negative reliability and, because of that, they applied a
> >> correction suggested by Krus and Helmstadter (1993). Thus, I am wondering
> >> whether I should use the original correlation or the corrected one. When
> >> authors applied the Spearman-Brown correction I reverted them to the
> >> original score, but for this one I don't know whether such an approach is
> >> OK. My intuition would be to use the uncorrected measure, since that's
> >> the most common approach in the sample and there isn't sufficient
> >> information to allow us to test the impact of these corrections. But I
> >> would appreciate your input on this.
> >> A second issue, but somewhat in line with the previous one: what do you
> >> recommend one do when multiple approaches are used to compute the
> >> reliability of the task but only one converges with what was typically
> >> done by other authors? I wouldn't be able to assess whether the decisions
> >> made an impact on the reliability as it is only one study, but I also
> >> don't want to bias the findings with my selection (though I have to say
> >> the results are quite consistent across approaches).
> >> Thank you.
> >> Best wishes,
> >> Catia