[R-meta] Calculating SMD from posttest only - is this possible?

James Pustejovsky jepusto at gmail.com
Mon Oct 7 02:59:20 CEST 2019


Just to add a bit to Wolfgang's response: I think it is helpful to consider
this question under the framework of risk of bias.

If an experiment is properly randomized and there is no attrition, then the
difference in post-test means is an unbiased estimator of the average
treatment effect (ATE). The same is true in a pre/post experiment: if the
randomization is properly conducted and there is no attrition, then the
difference in mean change scores is an unbiased estimator of the ATE
(because the difference in pre-test means has expectation zero). In other
words, both designs are used to estimate the same target parameter (albeit
with different degrees of precision).
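A quick simulation (a sketch with made-up parameter values, not from the original thread) illustrates this point: under proper randomization with no attrition, the posttest mean difference and the mean change-score difference both center on the same ATE.

```r
# Sketch: under proper randomization with no attrition, the posttest
# mean difference and the change-score mean difference both estimate
# the same average treatment effect (ATE). Numbers are illustrative.
set.seed(42)
n    <- 200   # per group
ate  <- 0.5   # true average treatment effect
reps <- 2000

est <- replicate(reps, {
  pre_t <- rnorm(n); pre_c <- rnorm(n)   # randomized: same pre distribution
  post_t <- 0.6 * pre_t + ate + rnorm(n, sd = 0.8)
  post_c <- 0.6 * pre_c + rnorm(n, sd = 0.8)
  c(post_diff   = mean(post_t) - mean(post_c),
    change_diff = mean(post_t - pre_t) - mean(post_c - pre_c))
})

rowMeans(est)  # both averages land close to ate = 0.5
```

The two estimators differ in sampling variance (precision), but not in their target.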

Where discrepancies might arise is if the design isn't perfect, such as if
the randomization might have been faulty (due to lack of allocation
concealment or something) or if there is substantial attrition, either of
which could lead to bias in the estimate of the ATE. The pre-post design is
probably more robust to such imperfections because it adjusts for baseline
differences. The post-test-only design is likely less robust because
there are no baseline covariates that could be used to adjust for
differences between groups at post-test. Considered in this light,
whether it's a good idea to include post-test-only designs depends on
whether those designs are at low risk of bias.
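Extending the same kind of sketch (again with hypothetical numbers): if randomization is faulty and the treatment group starts out higher at baseline, the posttest-only estimator absorbs that imbalance, while the change-score estimator removes it.

```r
# Sketch: a baseline imbalance (hypothetical "faulty randomization")
# biases the posttest-only estimator but not the change-score
# estimator, because the change score subtracts the baseline.
set.seed(123)
n     <- 200
ate   <- 0.5    # true average treatment effect
shift <- 0.3    # baseline advantage for the treatment group
reps  <- 2000

est <- replicate(reps, {
  pre_t <- rnorm(n, mean = shift)   # imbalanced baseline
  pre_c <- rnorm(n)
  post_t <- pre_t + ate + rnorm(n, sd = 0.5)
  post_c <- pre_c + rnorm(n, sd = 0.5)
  c(post_diff   = mean(post_t) - mean(post_c),
    change_diff = mean(post_t - pre_t) - mean(post_c - pre_c))
})

rowMeans(est)  # post_diff drifts toward ate + shift; change_diff stays near ate
```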


On Sat, Oct 5, 2019 at 4:46 PM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> Hi Roberto,
> Yes. In fact, that's essentially the original definition of the SMD, that
> is, the difference between two groups at the posttest divided by the SD (at
> the posttest).
> One can debate to what extent the standardized mean change (of a single
> group), the difference between two standardized mean changes (of two
> groups), and the standardized mean difference at the posttest (again, of
> two groups) can be combined in a single analysis. The canonical reference
> regarding this issue is:
> Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in
> meta-analysis with repeated measures and independent-groups designs.
> Psychological Methods, 7(1), 105-125.
> Best,
> Wolfgang
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:
> r-sig-meta-analysis-bounces using r-project.org] On Behalf Of P. Roberto Bakker
> Sent: Thursday, 03 October, 2019 18:54
> To: r-sig-meta-analysis using r-project.org
> Subject: [R-meta] Calculating SMD from posttest only - is this possible?
> Hi everybody
> I am performing a meta-analysis (with the 'metafor' R package) as described by
> Becker (1988), and I compute the standardized mean change for a treatment
> and control group.
> I compute a (standardized) effect size measure for pretest posttest control
> group designs, where the characteristic, response, or dependent variable
> assessed in the individual studies is a quantitative variable (Morris
> 2008).
> Now, two new articles have recently been published that report only the posttests.
> Question: can SMDs still be calculated with only the posttest (and sd) of
> treatment and control groups?
> Thank you in advance, Roberto
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
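To make Wolfgang's point concrete: the posttest-only SMD (Hedges' g) can be computed directly from the posttest means, SDs, and sample sizes. The sketch below uses made-up summary statistics and base R; metafor's escalc(measure = "SMD", ...) performs the same calculation.

```r
# Posttest-only SMD (Hedges' g) from hypothetical summary statistics.
# metafor::escalc(measure = "SMD", ...) computes the same quantities.
m1 <- 10.2; sd1 <- 2.1; n1 <- 40   # treatment posttest
m2 <-  9.1; sd2 <- 2.3; n2 <- 42   # control posttest

# Pooled posttest SD
sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
d  <- (m1 - m2) / sd_pooled                 # Cohen's d at posttest
J  <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)       # small-sample correction factor
g  <- J * d                                 # Hedges' g
vi <- 1/n1 + 1/n2 + g^2 / (2 * (n1 + n2))   # approximate sampling variance

round(c(g = g, vi = vi), 4)
```

The resulting yi/vi pairs can then be passed to rma() alongside standardized mean changes from the pre/post studies, subject to the comparability caveats discussed above (see Morris & DeShon, 2002).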

