[R-meta] Treating multiple conditions in within-subjects designs
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Wed Jan 8 14:09:15 CET 2025
Dear Nikoletta,
Please see below for my responses.
Best,
Wolfgang
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Nikoletta Symeonidou via R-sig-meta-analysis
> Sent: Wednesday, January 8, 2025 11:46
> To: r-sig-meta-analysis using r-project.org
> Cc: Nikoletta Symeonidou <nikoletta.symeonidou using uni-mannheim.de>
> Subject: [R-meta] Treating multiple conditions in within-subjects designs
>
> Dear list-members,
>
> We are currently conducting a meta-analysis using a hierarchical three-level
> model with the metafor package, where the highest level of the hierarchy is
> "study" (i.e., outcomes nested within studies). We face a challenge with studies
> that include more than one control condition and we are unsure how to address
> this issue in within-subjects designs (i.e., same participants in all
> conditions).
> Specifically, several studies use within-subjects designs with more than two
> control conditions, which leads to multiple comparisons of interest per study
> (e.g., experimental condition vs. control condition 1, and experimental
> condition vs. control condition 2). Our main concern is how to account for the
> dependence that arises when the same experimental condition is used to calculate
> multiple effect sizes within a single study.
> - Does the hierarchical three-level model inherently account for this
> dependence?
No, or at least not fully. The three-level model accounts for potential dependence in the underlying true effects (in the example you gave, there is in theory a true effect for experimental condition vs. control condition 1 and a true effect for experimental condition vs. control condition 2). Whether such dependence is present or not is an empirical question.

However, the model does not automatically account for the correlation (or covariance) between the sampling errors of the two effect sizes (i.e., using information from the experimental condition to compute both effect sizes induces such a correlation). Roughly, that correlation is 0.5, but one can compute it more accurately, depending on the specific effect size measure used and the group sizes. The chapter by Gleser and Olkin (2009) in 'The handbook of research synthesis and meta-analysis' (2nd ed.) provides equations for several effect size measures. See also https://www.metafor-project.org/doku.php/analyses:gleser2009 for code to reproduce the calculations from that chapter. The vcalc() function from the metafor package also provides functionality to compute the covariance. This is what goes into the infamous 'V' matrix that can be used as the second argument to rma.mv().
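As a rough sketch (assuming a hypothetical data frame 'dat' with columns 'study' for the study id, 'esid' for the effect size id within studies, and 'yi'/'vi' for the effect size estimates and their sampling variances; the 0.5 value is the approximation mentioned above, not a computed covariance):

```r
library(metafor)

# construct the V matrix, assuming a correlation of 0.5 between the
# sampling errors of effect sizes within the same study (induced by
# the shared experimental condition)
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.5, data = dat)

# fit the three-level model using this V matrix
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
summary(res)
```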
> - If not, what strategies would you recommend for appropriately handling such
> cases (with the metafor package)?
See above.
In addition, or alternatively (since constructing the V matrix can be a challenge), one could skip constructing V, fit the three-level model assuming independent sampling errors as the 'working model', and then use cluster-robust inference methods (robust variance estimation) to fix things up (using robust(..., clubSandwich=TRUE)).
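Again as a sketch (with the same hypothetical 'dat' as above, containing 'study', 'esid', 'yi', and 'vi' columns):

```r
library(metafor)

# three-level model with independent sampling errors as the working model
res <- rma.mv(yi, vi, random = ~ 1 | study/esid, data = dat)

# cluster-robust inference, with the small-sample adjustments from
# the clubSandwich package
robust(res, cluster = study, clubSandwich = TRUE)
```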
> We greatly appreciate any guidance you can provide. Thank you!
>
> Best regards,
> Nikoletta Symeonidou (University of Mannheim)