[R-meta] Combining studies with different within-subject designs
Viechtbauer, Wolfgang (SP)
Fri Feb 18 11:10:18 CET 2022
I am confused as to why you call case 2 a 'within-subject design'. That looks like a two-group post-test only design to me.
In any case, for all three cases, you want to use 'raw score standardization' to make the various effect size measures comparable, at least under some restrictive assumptions, such as that there is no inherent time effect or time-by-group interaction.
So, for case 2, you compute standard SMDs, which use the SD at a single timepoint (i.e. at post). This is in essence 'raw score standardization'.
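To make this concrete, here is a minimal sketch of raw score standardization for case 2, using hypothetical summary statistics (the numbers are made up for illustration). This computes the SMD by hand; escalc(measure="SMD") in metafor does the same, plus the small-sample correction:

```r
# Hypothetical post-test summary data for a two-group comparison (case 2)
m_t <- 12; sd_t <- 4; n_t <- 20   # treatment group at post
m_c <- 10; sd_c <- 4; n_c <- 20   # control group at post

# Pooled post-test SD: this is the 'raw score' standardizer
sd_pooled <- sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))

# Standardized mean difference (without the small-sample correction,
# which escalc(measure = "SMD") applies automatically)
smd <- (m_t - m_c) / sd_pooled
smd
```

With these inputs, sd_pooled is 4, so the SMD is 0.5.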
Analogously, for case 3, you also use 'raw score standardization', so measure "SMCR" in escalc() lingo.
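A quick sketch of the SMCR for case 3, again with hypothetical numbers: the mean change is divided by the raw (pre-test) SD, not the SD of the change scores, and the sampling variance follows Becker (1988). escalc(measure="SMCR", m1i=, m2i=, sd1i=, ni=, ri=) computes the same quantities (with the small-sample correction applied to the estimate):

```r
# Hypothetical single-group pre-post data (case 3)
m_pre  <- 10; m_post <- 12   # means at pre and post
sd_pre <- 4                  # raw score SD at pre (the standardizer)
n      <- 25                 # sample size
r      <- 0.6                # pre-post correlation

# Raw score standardization: mean change divided by the pre-test SD
smcr <- (m_post - m_pre) / sd_pre

# Sampling variance of the SMCR (Becker, 1988)
v_smcr <- 2 * (1 - r) / n + smcr^2 / (2 * n)
```

Here smcr is 0.5 and v_smcr is 0.037. Note that the pre-post correlation r is needed for the variance, so it may have to be guesstimated when it is not reported.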
And finally, for case 1, you again want to standardize based on the SD of a single timepoint, not the SD of the change scores. See:
for a discussion of how to do this.
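One way to carry this out for case 1 (a sketch under the same assumptions as above, with made-up numbers): compute a raw-score-standardized change (SMCR) within each group and take the difference; the sampling variances add, since the two groups are independent:

```r
# SMCR within one group: mean change / raw pre-test SD, with its variance
smcr_group <- function(m_pre, m_post, sd_pre, n, r) {
  yi <- (m_post - m_pre) / sd_pre
  vi <- 2 * (1 - r) / n + yi^2 / (2 * n)
  c(yi = yi, vi = vi)
}

# Hypothetical pre-post data for the treatment and control groups (case 1)
trt <- smcr_group(m_pre = 10, m_post = 13, sd_pre = 4, n = 20, r = 0.5)
ctl <- smcr_group(m_pre = 10, m_post = 11, sd_pre = 4, n = 20, r = 0.5)

# Pre-post-control effect size and its sampling variance
yi <- trt["yi"] - ctl["yi"]   # difference of the two SMCRs
vi <- trt["vi"] + ctl["vi"]   # variances add (independent groups)
```

With these inputs, yi is 0.5, on the same raw-score-standardized scale as the SMD and SMCR sketches for cases 2 and 3.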
If authors do not report the required raw score SDs, then you will have to get creative in obtaining them (e.g., contacting the authors, back-calculating them from the SD of the change scores and other reported information, guesstimating them).
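The back-calculation rests on the identity Var(change) = sd_pre^2 + sd_post^2 - 2*r*sd_pre*sd_post. If one is willing to assume sd_pre = sd_post = sd (an assumption you must judge per study), this simplifies to sd_change^2 = 2*sd^2*(1-r), which can be inverted:

```r
# Back-calculating a raw score SD from the reported change score SD,
# assuming sd_pre = sd_post = sd and a given pre-post correlation r
sd_change <- 4      # reported SD of the change scores (hypothetical)
r         <- 0.5    # assumed / guesstimated pre-post correlation

sd_raw <- sd_change / sqrt(2 * (1 - r))
sd_raw
```

With r = 0.5 the two SDs coincide (sd_raw = 4); for r > 0.5 the raw score SD is larger than the change score SD, and for r < 0.5 it is smaller.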
I would also code the type of design used in each study (and hence the type of effect size measure computed) to examine whether there are still systematic differences between these groups, even though the type of standardization is the same across measures.
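Such a moderator analysis can be sketched as follows (assuming metafor is installed and a hypothetical data frame dat with yi, vi, and a design variable coded, say, "ppc", "powc", or "sgpp" per study):

```r
library(metafor)

# Random-effects model with design type as a moderator; the omnibus
# QM test indicates whether the designs yield systematically
# different effect size estimates
res <- rma(yi, vi, mods = ~ factor(design), data = dat)
res
```

A non-significant moderator test is reassuring but not proof of comparability, especially with few studies per design type.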
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>Behalf Of Pablo Grassi
>Sent: Wednesday, 16 February, 2022 16:14
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] Combining studies with different within-subject designs
>Maybe some of you can help me out with the following design-conundrum. I
>am currently performing a series of meta-analyses investigating the
>effect of an intervention on closely related outcomes. Most of the
>reviewed studies are within-subject designs (one group of participants),
>for which the standardized mean differences (SMD; Hedges' g) are
>differences of change scores divided by the SD of the difference (i.e.,
>change score standardization), as follows:
> SMD = M_diff / SDdiff (for simplicity in this e-mail without the
>small-sample correction)
>Unfortunately, there is huge variability in the control measurements
>used. Roughly following the nomenclature from Morris 2008, I have the
>following different design-cases:
> Case 1) Within-subject design, pre-post-control (WS_PPC):
> M_diff_ws_ppc = (post_Treatment - pre_Treatment) -
>(post_Control - pre_Control)
>However, some others had no baseline pre-intervention measurement (i.e.
>they only report post-intervention measurements), i.e. a post-test only
>with control design (WS_POWC):
> Case 2) M_diff_ws_powc = (post_Treatment - post_Control)
>And a few others have only a pre-post measurement but no control
>(i.e. they only report a change score; single-group pre-post, SGPP), so that:
> Case 3) M_diff_ws_sgpp = (post_Treatment - pre_Treatment)
>Then, while the M_diff of Cases 1 and 2 measures more or less the same
>effect, their SDdiffs (and thus SMDs) are not really comparable, as they
>reflect different variances.
> * What would be the "standard" approach to include the within-subject
> design studies (Case 1, 2, 3) in the same meta-analysis, if this is
> possible at all? (Please consider that most of the publications
> report ONLY the SD of the change scores and NOT the SD of the pre
> and post conditions separately, or in case 2 of the post-intervention
> measurements).