[R-meta] pre to post treatment correlation: discrepancy in effect sizes
Steenhuis, L.A.
l.a.steenhuis at rug.nl
Fri Jun 22 11:28:15 CEST 2018
Hi all,
I have a question about the pre-to-post treatment correlation that is used
to calculate the standard error of the effect sizes. We are currently
conducting a meta-analysis of yoga interventions for depression and
anxiety symptoms. I have calculated the effect size as (post mean - pre
mean) / pre SD. The standard error is calculated using, among other
things, the pre-to-post treatment correlation. In line with the advice
that a conservative value of 0.7 is best to assume when the true
correlation is unknown, I have used this correlation for all studies.
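To illustrate what I mean, here is a minimal sketch assuming the standard
error is computed as in metafor's escalc() with raw-score standardization
(measure "SMCR"); the numbers are made up, not from any of our studies.
The assumed correlation ri only enters the sampling variance, so a higher
ri gives a smaller standard error:

library(metafor)

## hypothetical single-group pre/post summary data (not from our studies)
post_m <- 12.0   # post-treatment mean
pre_m  <- 16.5   # pre-treatment mean
pre_sd <- 7.0    # pre-treatment SD
n      <- 20     # sample size

## standardized mean change with raw-score standardization:
## yi = (post mean - pre mean) / pre SD (bias-corrected),
## vi is approximately 2*(1 - ri)/n + yi^2/(2*n)
es_07 <- escalc(measure = "SMCR", m1i = post_m, m2i = pre_m,
                sd1i = pre_sd, ni = n, ri = 0.7)
es_08 <- escalc(measure = "SMCR", m1i = post_m, m2i = pre_m,
                sd1i = pre_sd, ni = n, ri = 0.8)

## same yi, but a smaller SE (and larger |z|) under ri = 0.8
with(es_07, c(yi = yi, se = sqrt(vi), z = yi / sqrt(vi)))
with(es_08, c(yi = yi, se = sqrt(vi), z = yi / sqrt(vi)))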
The problem is as follows. One study reports a significant effect in the
article, but when I calculate the effect size myself it is not
significant. When I change the correlation to 0.8, it becomes significant.
My question is: do I stick with the assumption of 0.7, knowing that this
creates a discrepancy with the original study (non-significant instead of
significant), or do I make an exception for this study and use 0.8? In
addition, how is it possible that they found an effect and we do not? Is
that because the true correlation is in fact 0.8?
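For what it is worth, one thing I considered is back-calculating the
correlation implied by the study's reported paired t-test instead of
assuming a value. A sketch of how I would do this, again with hypothetical
numbers rather than the study's actual values:

## hypothetical reported values (not the actual study's numbers)
pre_m   <- 16.5; post_m  <- 12.0   # reported means
pre_sd  <- 7.0;  post_sd <- 6.5    # reported SDs
n       <- 20;   t_val   <- 3.1    # reported paired t-statistic

## SD of the change scores implied by the paired t-test:
## t = mean difference / (sd_change / sqrt(n))
sd_change <- abs(pre_m - post_m) * sqrt(n) / t_val

## implied pre-post correlation, from
## var(change) = var(pre) + var(post) - 2 * r * sd(pre) * sd(post)
r_implied <- (pre_sd^2 + post_sd^2 - sd_change^2) / (2 * pre_sd * post_sd)
r_implied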
The paper has been under review and we are currently updating the
manuscript for resubmission (with the correlation for this specific study
either left at 0.7 or set to 0.8 in the meta-analysis). I am looking
forward to your comments; many thanks in advance.
Best,
Laura A. Steenhuis, MSc
PhD researcher University of Groningen
Clinical Psychology and Experimental Psychopathology
tel. (0031) 50 363 6825