[R-meta] 2*2 ANOVA and meta-analysis
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Jan 2 11:09:03 CET 2018
Catching up on emails after a busy December.
I don't think an SMD value based on a 'simple' two-group comparison is theoretically comparable to an SMD value that reflects a 2x2 interaction. The latter reflects the difference between the two simple effects, which is something entirely different. Maybe you could just compute the two simple effects for the 2x2 designs, although you would still have to decide whether to compute the simple effects of factor A within each level of factor B, or the simple effects of factor B within each level of factor A. But maybe one of these two possibilities is more aligned with the two groups being compared in the 'simple' two-group studies.
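As a minimal sketch of the distinction (in base R, with made-up cell means, SDs, and sample sizes purely for illustration): each simple effect is an ordinary two-group SMD computed from two of the four cells, while the interaction contrast is the difference between two such simple effects, with a correspondingly larger sampling variance.

```r
# Hedges' g (bias-corrected SMD) and its sampling variance from summary
# statistics of two independent groups. Standard formulas; the large-sample
# variance approximation 1/n1 + 1/n2 + g^2 / (2*(n1+n2)) is used.
smd <- function(m1, sd1, n1, m2, sd2, n2) {
  df <- n1 + n2 - 2
  sp <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / df)  # pooled SD
  j  <- 1 - 3 / (4 * df - 1)                              # small-sample correction
  g  <- j * (m1 - m2) / sp
  vi <- 1 / n1 + 1 / n2 + g^2 / (2 * (n1 + n2))
  c(yi = g, vi = vi)
}

# Hypothetical cell statistics for a 2x2 between-subjects design
# (factors A and B, four independent cells):
cells <- data.frame(
  A  = c("a1", "a1", "a2", "a2"),
  B  = c("b1", "b2", "b1", "b2"),
  m  = c(10, 12, 11, 9),
  sd = c(4, 4, 4, 4),
  n  = c(25, 25, 25, 25)
)

# Simple effects of B within each level of A:
e1 <- with(cells, smd(m[1], sd[1], n[1], m[2], sd[2], n[2]))  # b1 vs b2 at a1
e2 <- with(cells, smd(m[3], sd[3], n[3], m[4], sd[4], n[4]))  # b1 vs b2 at a2

# Interaction contrast: difference of the two (independent) simple effects;
# its variance is the sum of the two variances.
int <- c(yi = unname(e1["yi"] - e2["yi"]), vi = unname(e1["vi"] + e2["vi"]))
```

Note that `int["vi"]` is roughly twice the variance of either simple effect, which is one concrete way of seeing that the interaction SMD is not on the same footing as a two-group SMD from a 'simple' study.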
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of robert ross
Sent: Monday, 04 December, 2017 18:29
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] 2*2 ANOVA and meta-analysis
I’m performing a meta-analysis using the standardised mean difference. In most cases, effect sizes are calculated by comparing a control group and an experimental group using a t-test. However, a small number of relevant studies have two manipulations and use a 2*2 between-subjects factorial ANOVA to test for a predicted crossover interaction. Popular effect size calculators for two-way ANOVAs include a “treatment factor” and an “other factor” and generate an effect size in terms of the treatment factor. For example:
With the studies I’m examining it doesn’t seem appropriate to treat one of the main effects as pertaining to the “treatment factor” and the other as pertaining to the “other factor” because both factors were of equal theoretical interest and both were manipulated.
Is it possible (and appropriate) to use the interaction term from the studies with a 2*2 factorial design in a meta-analysis that focuses on studies with simpler t-test based designs? If so, how could this be done? If it would not be possible or appropriate, are there any alternatives? In some cases I have access to raw data, so I would be able to calculate effect sizes not reported in papers.
Any comments (or pointers to relevant literature) would be much appreciated.