[R-meta] Accounting for pre-treatment data

James Pustejovsky jepusto at gmail.com
Thu Nov 16 20:58:25 CET 2017


I have dealt with studies like this in some of my work. I assume that you
have means, standard deviations, and sample sizes by group for each time
point. Denote the means as M0T (pre-test mean, treatment group), M0C
(pre-test mean, control group), M1T (post-test mean, treatment group), M1C
(post-test mean, control group). Along the same lines, denote the standard
deviations as S0T, S0C, S1T, S1C and the sample sizes as N0T, N0C, N1T,
N1C. Let S0P and S1P be the pooled standard deviations at pre-test and
post-test, respectively.

I am not a big fan of using the difference in standardized mean change
scores because using a different standardization factor for each group
introduces an additional source of variability in the effect size estimate.
Here are three other approaches, each with benefits and drawbacks, and
which may or may not be feasible:

Option 1: Calculate standardized mean differences based on post-test data

d1 = (M1T - M1C) / S1P.

- The variance of this estimate can be estimated as V1 = 1 / N1T + 1 / N1C
+ d1^2 / (2 * (N1T + N1C - 2)).
- With low attrition and valid randomization, these effect size estimates
should be unbiased but might have high variance because the sample sizes
are small.
- If there was substantial attrition, there might be bias in these effect
size estimates.
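As a minimal R sketch of option (1) (the function name smd_post and the
numbers are made up, not from metafor; inputs follow the notation above):

```r
# Standardized mean difference from post-test data only (option 1).
# Hypothetical helper, not a metafor function.
smd_post <- function(M1T, M1C, S1T, S1C, N1T, N1C) {
  # pooled post-test SD
  S1P <- sqrt(((N1T - 1) * S1T^2 + (N1C - 1) * S1C^2) / (N1T + N1C - 2))
  d1 <- (M1T - M1C) / S1P
  # large-sample variance of d1
  V1 <- 1 / N1T + 1 / N1C + d1^2 / (2 * (N1T + N1C - 2))
  c(d = d1, v = V1)
}

# made-up summary statistics
smd_post(M1T = 24.5, M1C = 21.0, S1T = 6.2, S1C = 5.8, N1T = 25, N1C = 23)
```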

Option 2: Calculate standardized mean differences based on the difference
in mean change, standardizing by the pooled baseline SD:

d2 = (M1T - M0T - M1C + M0C) / S0P.

- Or standardize by the pooled post-test SD if you prefer.
- In either case, these effect size estimates should be unbiased if there
is low attrition and valid randomization.
- However, if there was substantial attrition, there might still be bias in
these estimates---particularly if the post-test means are based on only the
post-test respondents but the pre-test means are based on the full sample.
So option (2) is more or less the same as option (1) in terms of bias.
- Option (2) has the advantage that the effect size estimate should be a
little bit more precise than with option (1).
- A difficulty with option (2) is that estimating the variance of d2
requires knowing the correlation between the pre-test and post-test or
knowing the SD or SE of the change scores. If you have the SE of the mean
change in each group (or its SD, from which SE = SD / sqrt(N)), then you
can estimate the variance as V2 = (SE_T^2 + SE_C^2) / S0P^2 + d2^2 /
(2 * (N0T + N0C - 2)). If you can't get the pre-post correlation or the SE of the
change scores, then you can't really take advantage of the increased
precision of the effect size estimate, and it seems like Option (2) would
have little advantage over option (1).
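A matching sketch for option (2), assuming the SE of the mean change in
each group is available (the helper name and numbers are hypothetical):

```r
# Difference in mean change, standardized by pooled baseline SD (option 2).
# Hypothetical helper; inputs follow the notation above.
smd_change <- function(M0T, M1T, M0C, M1C, S0T, S0C, N0T, N0C, SE_T, SE_C) {
  # pooled pre-test SD
  S0P <- sqrt(((N0T - 1) * S0T^2 + (N0C - 1) * S0C^2) / (N0T + N0C - 2))
  d2 <- ((M1T - M0T) - (M1C - M0C)) / S0P
  # SE_T and SE_C are the standard errors of the mean change in each group
  V2 <- (SE_T^2 + SE_C^2) / S0P^2 + d2^2 / (2 * (N0T + N0C - 2))
  c(d = d2, v = V2)
}

# made-up summary statistics
smd_change(M0T = 20.1, M1T = 24.5, M0C = 20.8, M1C = 21.0,
           S0T = 5.9, S0C = 6.1, N0T = 25, N0C = 23,
           SE_T = 0.9, SE_C = 1.0)
```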

Option 3: If regression-adjusted means (or the regression-adjusted mean
difference from an ANCOVA) are reported, then you could use these to
calculate standardized mean differences, standardizing by the pooled
baseline SD. If MAT and MAC are the regression-adjusted means, then the
effect size is

d3 = (MAT - MAC) / S0P.

- Or standardize by the pooled post-test SD if you prefer.
- To calculate the variance of d3, you need to know the standard errors of
the regression-adjusted means. If these are SET and SEC, then you can
estimate the variance as V3 = (SET^2 + SEC^2) / S0P^2 + d3^2 /
(2 * (N0T + N0C - 2)). If the standard error of the regression-adjusted
difference in means is reported (call this SED), then the variance can be
estimated as V3 = SED^2 / S0P^2 + d3^2 / (2 * (N0T + N0C - 2)).
- Regression adjustment is advantageous because it can reduce bias from
attrition, and it will also give more precise estimates than using change
scores or differences in post-test means. However, it is only feasible if
the component estimates are reported or can be obtained (or if you know the
pre-post correlation).
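Option (3) can be sketched the same way, using either the two SEs of the
adjusted means or the SE of the adjusted difference (the helper name and
numbers are hypothetical):

```r
# ANCOVA-adjusted mean difference, standardized by pooled baseline SD (option 3).
# Hypothetical helper; inputs follow the notation above.
smd_ancova <- function(MAT, MAC, S0P, N0T, N0C,
                       SET = NULL, SEC = NULL, SED = NULL) {
  d3 <- (MAT - MAC) / S0P
  # use the SE of the adjusted difference if available, else the two SEs
  if (!is.null(SED)) {
    V3 <- SED^2 / S0P^2 + d3^2 / (2 * (N0T + N0C - 2))
  } else {
    V3 <- (SET^2 + SEC^2) / S0P^2 + d3^2 / (2 * (N0T + N0C - 2))
  }
  c(d = d3, v = V3)
}

# made-up summary statistics
smd_ancova(MAT = 24.2, MAC = 21.4, S0P = 6.0, N0T = 25, N0C = 23, SED = 1.2)
```

Whichever option you use, the resulting d and v pairs can then be supplied
to metafor::rma(yi, vi) for pooling.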

For a longer explanation and further discussion of these ideas, see


On Thu, Nov 16, 2017 at 8:12 AM, Rachel Schwartz <raschwartz7 at gmail.com> wrote:

> Hello all- I am conducting a small meta-analysis that compares 3 treatments
> against 3 independent controls. I believe the standard procedure is to
> calculate effect sizes (Hedges' g) that compare only post-treatment data,
> given the assumption that randomization should equalize pre-treatment data
> across groups. However, my 3 studies have small samples (n's 19-63) --
> because randomization can fail with small samples, I don't feel comfortable
> assuming that pre-treatment data are comparable across groups. To account
> for this, I was thinking of calculating two change scores ("SMCC"): one
> within the treatment arms (pre-post) and another within the control arms
> (pre-post)... then subtracting the resulting effect sizes, as in the
> following example:
> http://r.789695.n4.nabble.com/metafor-standardized-mean-difference-in-pre-post-design-studies-td4705800.html
> Is this approach advisable, or should I just go with post data only?
> Perhaps there's some other way to control for pre data without resorting to
> change scores? The comprehensive meta-analysis software seems to have some
> way of accounting for pre-treatment data. Is there a way to approach the
> question similarly in metafor?
> Finally, if I should mimic the example above, is it correct to simply add
> the variances as this poster did? (datFin <- data.frame(yi = datE$yi -
> datC$yi, vi = datE$vi + datC$vi))
> Many thanks to anyone who has advice.
> --
> Rachel A. Schwartz, M.A.
> Ph.D. Student | University of Pennsylvania
> 425 S. University Avenue
> Philadelphia, PA 19104
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis

