[R-meta] Meta Analysis on interaction effects in different designs
Selma Rudert
rudert at uni-landau.de
Thu Apr 15 08:24:27 CEST 2021
Hi James,
I just realized I never replied to this, because I wanted to discuss your answer with my colleagues first and then I forgot! So this is very belated, but: thank you so much for your detailed reply! We decided to go with Option 3, the integrative data analysis, because that was another option we were playing around with at the time anyway. The only issue was that, due to the differences in designs, we couldn't accommodate random intercepts for the stimuli or random slopes for the interaction terms within that model. Eventually, the full integrative data analysis ended up being reported in the supplement only, whereas we summarize Studies 1 & 2 and Studies 3 & 4 in two separate analyses in the paper.
The Editor ended up being happy with our suggestion and the paper is now accepted. Thank you once again for your advice!
Best
Selma
> On 27.02.2021 at 17:17, James Pustejovsky <jepusto at gmail.com> wrote:
>
> Hi Selma,
>
> This is a tricky question, which may be why you haven't received any response from the listserv. To provide a partial answer:
> 1) Yes, in principle it's fine to meta-analyze interaction terms.
> 2) The point of using standardized effect sizes is to provide a means of putting effects on a commensurable scale. But you presumably have the raw data from these studies, which opens up further possibilities besides using standardized mean differences. How are the outcome measurements in studies 1 and 2 related to the outcomes in studies 3 and 4? Perhaps there would be some other way of equating them.
>
> For instance, say that studies 1 and 2 use two binary outcomes whereas studies 3 and 4 use 20 binary outcomes per condition, and that, for a given participant, every binary outcome has the same probability of being positive. That is:
> - In studies 1 and 2: Y_i ~ Binomial(2, p_i)
> - In studies 3 and 4: Y_i ~ Binomial(20, p_i)
> This would suggest that if you put all of the outcomes on a [0,1] scale, then you can treat the effects (whether main effects or interactions) as changes in average probabilities across participants.
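>
> A minimal sketch of that rescaling in R (the data frame name dat and the columns y and n_trials are just placeholders):
>
>     # Rescale each participant's count of positive responses to a
>     # proportion on [0, 1], so that studies contributing 2 trials
>     # per condition and studies contributing 20 end up on the same
>     # probability scale.
>     dat$prop <- dat$y / dat$n_trials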
>
> 3) If you can find a way to equate the outcomes across all four studies, then you could consider two different ways of synthesizing the effects. One would be to put the outcomes on the common scale, estimate the interaction terms (and standard errors) from each study, and then average the interaction terms together using a fixed-effect meta-analysis. Another approach would be to pool all of the raw data together (across all four studies) and develop one joint model for the main effects and interaction terms. The latter approach is sometimes called "integrative data analysis." See here for all the details:
> Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: The simultaneous analysis of multiple data sets. Psychological Methods, 14(2), 81–100.
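>
> A rough sketch of both routes in R, just to make this concrete (the object and column names -- study_estimates, b_int, se_int, pooled_data -- are placeholders, and the random-effects structure shown is only illustrative of what the pooled designs might support):
>
>     # Route 1: fixed-effect meta-analysis of the study-specific
>     # interaction estimates (b_int) and their standard errors (se_int)
>     library(metafor)
>     res <- rma(yi = b_int, sei = se_int, data = study_estimates,
>                method = "FE")
>     summary(res)
>
>     # Route 2: integrative data analysis -- pool the raw data from
>     # all four studies and fit one joint model for the main effects
>     # and the A x B x C interaction
>     library(lme4)
>     fit <- lmer(prop ~ A * B * C + (1 | study) + (1 | participant),
>                 data = pooled_data)
>     summary(fit)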
>
> James
>
> On Mon, Feb 15, 2021 at 11:28 AM Selma Rudert <rudert at uni-landau.de> wrote:
> Hi,
>
> first of all, I need to disclose that I am pretty much a newbie to meta-analysis. I am familiar with the general idea and the procedure of a "simple" meta-analysis, but I have an issue that seems highly specific to me, and I wasn't able to find an answer in the literature or a previous posting, so I'd be happy for expert opinions.
>
> I currently have a manuscript in peer review in which the Editor asked us to do a mini meta-analysis over the four studies in the paper. All four studies use the same DV and manipulate the same factors; however, they differ in the implemented design:
>
> In Studies 1 and 2, two of the factors (A and B) are manipulated between subjects and one within subjects (C). That is, each participant gives two responses in total.
>
> In Studies 3 and 4, participants rate 40 stimuli that differ on the two factors A and B. Again, each stimulus is rated twice (factor C), so each participant gives 80 responses in total, and all variables of interest are assessed within subjects. To analyze the data, we used a linear mixed model/multilevel model with stimuli and participants as random factors.
>
> The critical effect that the Editor is interested in is a rather complex three-way interaction AxBxC. Is it appropriate to summarize effect sizes of interaction terms in a meta-analysis? From a former post on this mailing list I assumed it can be done: https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-February/000658.html However, I am wondering whether it is appropriate to combine effect sizes given the differences in the designs (between-subjects elements in Studies 1 and 2 but not in Studies 3 and 4) and the different metric of the effect sizes (in the LMM in Studies 3 and 4, we use an approximation of d, as suggested by Westfall (2014), that uses the estimated variance components instead of the standard deviation). I read Morris & DeShon (2002) and understood that it is a potential problem to combine effect sizes that do not use the same metric. Unfortunately, the transformation they suggest refers to combining effect sizes derived from comparisons of independent-groups vs. repeated-measures designs and does not extend to linear mixed models.
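>
> For concreteness, a minimal sketch of that d approximation from a fitted lme4 model (the model object fit and the coefficient name "A:B:C" are placeholders):
>
>     # Standardize the fixed-effect estimate by the square root of
>     # the sum of the estimated variance components (random-effect
>     # variances plus the residual variance)
>     library(lme4)
>     vc <- as.data.frame(VarCorr(fit))
>     # keep variance rows only (var2 is NA), not covariance rows
>     sd_total <- sqrt(sum(vc$vcov[is.na(vc$var2)]))
>     d_approx <- fixef(fit)["A:B:C"] / sd_total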
>
> One idea I had is to follow the approach of Goh, Hall & Rosenthal (2016) for a random-effects approach (which would basically mean averaging effect sizes and ignoring all differences in design, metric, etc.). I'd be thankful for any thoughts on whether such a meta-analysis can and should reasonably be done, for any alternative suggestions, or on whether, due to the differences between the designs, it would be advisable to stay away from it.
>
> Best,
>
> Selma