[R-meta] pre to post treatment correlation: discrepancy in effect sizes

James Pustejovsky jepusto at gmail.com
Fri Jun 22 14:01:00 CEST 2018


Laura,

Your question involves a judgement call where I think there are legitimate
arguments on both sides:

- On the one hand, the primary study analysis is based on the raw data and
so will presumably be more accurate than the approximations you have to
make when working only with reported summary information. This argues for
matching the primary study analysis as closely as possible. In fact, if the
p-value, t-statistic, SD of the pre/post change, or SE of the pre/post
change is reported along with the summary statistics in the primary study,
then it is possible to "back out" the sample correlation between pre and
post (see the sketch after this list). Using this correlation, your effect
size SE will be consistent with what is reported in the primary study.

- On the other hand, primary studies might involve all sorts of
questionable analytic practices to achieve statistical significance, so it
is worth being skeptical of the analyses reported in primary studies.
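
To illustrate the first point: if a study reports a paired t-statistic
along with the pre/post means and SDs, the change-score SD follows from
t = (m_post - m_pre) / (sd_d / sqrt(n)), and the correlation then follows
from sd_d^2 = sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post. Here is a
minimal R sketch (all of the summary numbers below are invented purely for
illustration):

library(metafor)

# Hypothetical summary statistics reported in a primary study
n       <- 30     # sample size
m_pre   <- 24.0   # pre-treatment mean
m_post  <- 19.5   # post-treatment mean
sd_pre  <- 6.0    # pre-treatment SD
sd_post <- 6.5    # post-treatment SD
t_stat  <- -4.1   # reported paired t-statistic

# SD of the change scores, backed out from the paired t-test:
# t = (m_post - m_pre) / (sd_d / sqrt(n))
sd_d <- abs(m_post - m_pre) * sqrt(n) / abs(t_stat)

# Pre/post correlation, from
# Var(post - pre) = sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post
r <- (sd_pre^2 + sd_post^2 - sd_d^2) / (2 * sd_pre * sd_post)

# Standardized mean change (raw-score standardization), using the
# recovered correlation in place of an assumed value such as 0.7
escalc(measure = "SMCR", m1i = m_post, m2i = m_pre,
       sd1i = sd_pre, ni = n, ri = r)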

On balance, the approach I would take is to treat all of the studies
uniformly: if you are able to back out sample correlations from the bulk of
the studies, then do so across the board, but don't make a special
exception for this one study just because you get a different significance
level. I think it would also be reasonable to justify your original
approach on the grounds that you've followed a consistent method for
extracting effect size estimates from the primary studies included in the
synthesis. Either way, you could report a sensitivity analysis using robust
variance estimation to allow for potentially inaccurate SEs due to the
approximations you've had to make.
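
For the sensitivity analysis, here is a sketch of one way to do this with
the clubSandwich package, assuming a data frame dat with columns yi (the
effect size estimates), vi (their approximate sampling variances), and
study (a study identifier); all of these names are placeholders for your
own data:

library(metafor)
library(clubSandwich)

# Random-effects model on the extracted effect sizes
res <- rma(yi, vi, data = dat)

# Cluster-robust (CR2) standard errors with Satterthwaite degrees of
# freedom; these remain valid even if the vi are only approximate
coef_test(res, vcov = "CR2", cluster = dat$study)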

James

On Fri, Jun 22, 2018 at 4:28 AM Steenhuis, L.A. <l.a.steenhuis at rug.nl>
wrote:

>  Hi all,
>
> I have a question about the pre to post treatment correlation that is used
> to calculate the standard error for effect sizes. We are currently
> conducting a meta-analysis of yoga interventions for depression and
> anxiety symptoms. I have calculated the effect size as (post mean - pre
> mean) / pre-treatment SD. The standard error is calculated using, among
> other quantities, the pre to post treatment correlation. In line with the
> advice that a conservative value of 0.7 is best to assume when the true
> correlation is unknown, I have used this correlation for all studies.
>
> The problem is as follows. One study reports a significant effect in the
> article, but when I calculate the effect size it is not significant. When
> I change the correlation to 0.8, it becomes significant. My question is:
> do I stick to the assumption of 0.7, knowing this creates a discrepancy
> with the original study (non-significant instead of significant), or do I
> make an exception for this study and use 0.8? In addition, how is it
> possible that they found an effect and we do not? Is that because the
> true correlation is in fact 0.8?
>
> The paper is under review and we are currently updating the manuscript
> for resubmission (either keeping the correlation at 0.7 or using 0.8 for
> this specific study in the meta-analysis). I am looking forward to your
> comments; many thanks in advance.
>
> Best,
>
> Laura A. Steenhuis, MSc
> PhD researcher University of Groningen
> Clinical Psychology and Experimental Psychopathology
> tel. (0031) 50 363 6825
