[R-meta] negative reliability
Catia Oliveira
catiaaoliveira sending from york.ac.uk
Sun Apr 30 03:14:11 CEST 2023
Dear Michael,
Thank you for your help. I truly appreciate it.
Best wishes,
Catia
On Sat, 29 Apr 2023 at 10:47, Michael Dewey <lists using dewey.myzen.co.uk> wrote:
> I think I would just include the one where they use the same method as
> other authors. That seems simpler and avoids introducing unnecessary
> heterogeneity.
>
> Michael
>
> On 28/04/2023 23:25, Catia Oliveira wrote:
> > Dear Michael and Wolfgang,
> >
> > Thank you for your reply. I am also intrigued as to why a negative
> > correlation between test and retest would be encountered. I will think
> > about it carefully and plan to do a sensitivity analysis.
> > Do you have any thoughts about the second question?
> >
> > A second issue, somewhat in line with the previous one: what do you
> > recommend when multiple approaches are used to compute the reliability
> > of the task but only one converges with what other authors typically
> > did? I wouldn't be able to assess whether those decisions had an impact
> > on the reliability, as it is only one study, but I also don't want to
> > bias the findings with my selection (though I have to say the results
> > are quite consistent across approaches). Or do you think I should just
> > include all the results, since I am already using a multilevel approach?
> > I just wouldn't be able to test whether the manipulations affect the
> > results, and they may increase the heterogeneity of the results.
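For the multilevel route mentioned above, correlations are typically analysed on Fisher's z scale (with sampling variance 1/(n - 3)), with estimates nested within studies so their dependency is modelled rather than ignored. A minimal sketch of the transform and back-transform, using entirely hypothetical numbers (a proper analysis would fit a multilevel model, e.g. metafor's rma.mv in R, rather than a plain average):

```python
import math

# Hypothetical reliability estimates from one study, obtained with
# different computational approaches; all numbers are made up.
r_values = [0.62, 0.58, 0.65]
n = 80  # hypothetical sample size

# Fisher's z transform; its sampling variance is 1 / (n - 3).
z_values = [math.atanh(r) for r in r_values]
var_z = 1 / (n - 3)  # same for each estimate here

# Averaging on the z scale (a crude stand-in for what a multilevel
# model does properly) and transforming back to the r scale:
mean_z = sum(z_values) / len(z_values)
pooled_r = math.tanh(mean_z)
```

The z scale is used because the sampling distribution of r is skewed, especially for the large correlations typical of reliability studies; pooling on z and back-transforming avoids that bias.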
> >
> > Thank you!
> >
> > Catia
> >
> > On Fri, 28 Apr 2023 at 11:44, Viechtbauer, Wolfgang (NP)
> > <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >
> > I am also not familiar with the correction by Krus and Helmstadter
> > (1993) (it seems to be this article, in case anybody is interested:
> > https://doi.org/10.1177/0013164493053003005), so I cannot really
> > comment on this. I think in the end you just have to make your own
> > decisions here (unless somebody comes up with further wisdom) and
> > document your choices. As always, a sensitivity analysis is also an
> > option.
> >
> > P.S.: Your message also arrived in my inbox (if you recall from our
> > previous communication, this was an issue in the past, but with the
> > adjusted mailing list settings, it now seems to be resolved).
> >
> > Best,
> > Wolfgang
> >
> > >-----Original Message-----
> > >From: Michael Dewey [mailto:lists using dewey.myzen.co.uk]
> > >Sent: Friday, 28 April, 2023 11:01
> > >To: R Special Interest Group for Meta-Analysis; Viechtbauer,
> > >Wolfgang (NP); James Pustejovsky
> > >Cc: Catia Oliveira
> > >Subject: Re: [R-meta] negative reliability
> > >
> > >Dear Catia
> > >
> > >You can check whether it was transmitted by going to
> > >
> > >https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2023-April/author.html
> > >
> > >where it appears.
> > >
> > >The fact that you got no response may be because we are all struggling
> > >with the idea of a test-retest or split-half reliability estimate which
> > >was negative and what we would do with it. So people who scored high
> > >the first time now score low? If it is split-half it suggests that the
> > >hypothesis that the test measures one thing is false.
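Michael's point can be seen with a toy example: if scores on the two halves of a task move in opposite directions across people, the split-half correlation comes out negative. The per-person scores below are entirely made up for illustration:

```python
# Hypothetical per-person scores on the two halves of a task.
# Half B is constructed to move opposite to half A, so the
# split-half correlation comes out strongly negative.
half_a = [10, 12, 15, 18, 20, 23]
half_b = [22, 20, 17, 15, 12, 10]

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(half_a, half_b)  # negative: the halves disagree systematically
```

A negative value here would be hard to read as "reliability" at all, which is why the two halves measuring one thing is the hypothesis being questioned.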
> > >
> > >Michael
> > >
> > >On 28/04/2023 01:45, Catia Oliveira via R-sig-meta-analysis wrote:
> > >> Dear all,
> > >>
> > >> I apologise if I am spamming you but I think you didn't receive my
> > >> previous email. At least I was not notified.
> > >>
> > >> I am running a meta-analysis on the reliability of a task (computed
> > >> as a correlation between sessions or halves of the task, depending
> > >> on whether it is test-retest or split-half reliability) and I have
> > >> come across one result that I am not sure how to handle. According
> > >> to the authors, they found negative reliability and, because of
> > >> that, they applied a correction suggested by Krus and Helmstadter
> > >> (1993). Thus, I am wondering if I should use the original
> > >> correlation or the corrected one. When authors applied the
> > >> Spearman-Brown correction I reverted them to the original score, but
> > >> with this one I don't know if such an approach is OK. My intuition
> > >> would be to use the uncorrected measure, since that is the most
> > >> common approach in the sample and there isn't sufficient information
> > >> to allow us to test the impact of these corrections. But I would
> > >> appreciate your input on this.
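For reference, the Spearman-Brown reversal mentioned above is simple algebra: the step-up formula r_sb = 2r / (1 + r) inverts to r = r_sb / (2 - r_sb). A quick round-trip check, with a hypothetical half-test correlation:

```python
def spearman_brown(r_half):
    """Step up a half-test correlation to estimated full-length reliability."""
    return 2 * r_half / (1 + r_half)

def invert_spearman_brown(r_sb):
    """Recover the original half-test correlation from a corrected value."""
    return r_sb / (2 - r_sb)

r_half = 0.5                            # hypothetical split-half correlation
r_sb = spearman_brown(r_half)           # corrected (full-length) estimate
recovered = invert_spearman_brown(r_sb) # back to the original value
```

This is why Spearman-Brown-corrected values can safely be reverted: the correction is a one-to-one function of the original correlation, unlike corrections whose inputs are not reported.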
> > >>
> > >> A second issue, somewhat in line with the previous one: what do you
> > >> recommend when multiple approaches are used to compute the
> > >> reliability of the task but only one converges with what other
> > >> authors typically did? I wouldn't be able to assess whether those
> > >> decisions had an impact on the reliability, as it is only one study,
> > >> but I also don't want to bias the findings with my selection (though
> > >> I have to say the results are quite consistent across approaches).
> > >>
> > >> Thank you.
> > >>
> > >> Best wishes,
> > >>
> > >> Catia
> >
> >
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
>
More information about the R-sig-meta-analysis mailing list