[R-meta] Meta Analysis on interaction effects in different designs

Selma Rudert rudert at uni-landau.de
Mon Feb 15 18:27:30 CET 2021


Hi,

first of all, I need to disclose that I am pretty much a newbie to meta-analysis. I am familiar with the general idea and the procedure of a "simple" meta-analysis, but I have an issue that seems so specific that I wasn't able to find an answer in the literature or in a previous posting, so I'd be happy for expert opinions.

I currently have a manuscript in peer review in which the Editor asked us to do a mini meta-analysis over the four studies in the paper. All four studies use the same DV and manipulate the same factors; however, they differ in the implemented design:

In Studies 1 and 2, two of the factors (A and B) are manipulated between subjects and one (C) within subjects. That is, each participant gives two responses in total.

In Studies 3 and 4, participants rate 40 stimuli that differ on the two factors A and B. Again, each stimulus is rated twice (factor C), so each participant gives 80 responses in total and the variables of interest are assessed within subjects. To analyze the data, we used a linear mixed model/multilevel model with stimulus and participant as random factors.
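
In lme4 syntax, the model is essentially of the following form (the data below are only simulated placeholders so that the call is reproducible, not our actual data):

library(lme4)

# Placeholder data: 20 participants crossed with 40 stimuli,
# each stimulus rated twice (factor C)
set.seed(1)
dat <- expand.grid(participant = factor(1:20),
                   stimulus    = factor(1:40),
                   C           = factor(1:2))
dat$A      <- factor(as.integer(dat$stimulus) %% 2)         # stimulus-level factor
dat$B      <- factor((as.integer(dat$stimulus) %/% 2) %% 2) # stimulus-level factor
dat$rating <- rnorm(nrow(dat))

# Crossed random intercepts for participants and stimuli
m <- lmer(rating ~ A * B * C + (1 | participant) + (1 | stimulus),
          data = dat)
summary(m)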

The critical effect that the Editor is interested in is a rather complex three-way interaction AxBxC. Is it appropriate to summarize effect sizes of interaction terms in a meta-analysis? From a former post on this mailing list I gathered that it can be done: https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-February/000658.html

However, I am wondering whether it is appropriate to combine effect sizes given the differences in the designs (between-subjects elements in Studies 1 and 2 but not in Studies 3 and 4) and the different metric of the effect sizes (in the LMM in Studies 3 and 4 we use an approximation of d as suggested by Westfall (2014) that uses the estimated variance components instead of the standard deviation). I read Morris & DeShon (2002) and understood that it is a potential problem to combine effect sizes that do not use the same metric. Unfortunately, the transformation they suggest refers to combining effect sizes derived from independent-groups vs. repeated-measures designs and does not extend to linear mixed models.
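To be concrete about the metric: this Westfall-style d divides the fixed-effect estimate of interest by the square root of the sum of all estimated variance components. Continuing the placeholder sketch above, it would look roughly like this (the coefficient name "A1:B1:C2" is simply what lme4 assigns to the three-way term under the default treatment contrasts of the placeholder data):

# Sum of all estimated variance components (including the residual)
vc    <- as.data.frame(VarCorr(m))
denom <- sqrt(sum(vc$vcov))

# Fixed-effect estimate of the AxBxC interaction term
b <- fixef(m)[["A1:B1:C2"]]

d_westfall <- b / denom
d_westfall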

One idea that I had is to follow the approach of Goh, Hall & Rosenthal (2016) with a random-effects approach (which would basically mean averaging the effect sizes and just ignoring all differences in design, metric, etc.). I'd be thankful for any thoughts on whether such a meta-analysis can and should reasonably be done, for any alternative suggestions, or on whether, due to the differences between the designs, it would be advisable to stay away from it.
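
For illustration, such a random-effects mini meta-analysis could be run in metafor along these lines (the yi/vi values are made up; in practice they would be the four AxBxC effect sizes and their sampling variances):

library(metafor)

# Made-up numbers purely for illustration
dat_ma <- data.frame(study = paste("Study", 1:4),
                     yi    = c(0.21, 0.18, 0.25, 0.30),
                     vi    = c(0.010, 0.012, 0.008, 0.009))

# Random-effects model over the four interaction effect sizes
res <- rma(yi, vi, data = dat_ma, slab = study, method = "REML")
summary(res)
forest(res)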

Best,

Selma