[R-meta] Meta-analysis per level or meta-regression

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Mar 20 20:24:00 CET 2023


Just to add to this: these two pages on the metafor website are relevant to this discussion:

https://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
https://www.metafor-project.org/doku.php/tips:different_tau2_across_subgroups
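
For anyone who wants a quick illustration, here is a minimal sketch in the
spirit of the first page, using the BCG data that ship with metafor as a toy
example (see the page itself for the full discussion):

library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

### random-effects model within each subgroup
res1 <- rma(yi, vi, data=dat, subset=(alloc=="random"))
res2 <- rma(yi, vi, data=dat, subset=(alloc=="systematic"))

### Wald-type test of the difference between the two pooled estimates
zval <- (unname(coef(res1)) - unname(coef(res2))) / sqrt(res1$se^2 + res2$se^2)
pval <- 2 * pnorm(abs(zval), lower.tail=FALSE)
round(c(z = zval, p = pval), 4)

Note that fitting the subgroups separately, as above, implicitly allows tau^2
to differ between them, which is the point of the second page.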

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of James Pustejovsky via R-sig-meta-analysis
>Sent: Monday, 20 March, 2023 19:27
>To: R Special Interest Group for Meta-Analysis
>Cc: James Pustejovsky
>Subject: Re: [R-meta] Meta-analysis per level or meta-regression
>
>Hi Catia,
>
>I don't know of research that has looked at differences between these
>approaches empirically.
>
>I would interpret the issue in terms of a difference between two
>meta-regression models: one in which the between-study heterogeneity is
>constrained to be equal across levels of the moderator and one in which the
>between-study heterogeneity is allowed to differ by level of the moderator.
>María Rubio-Aparicio and colleagues compared these two models in a
>simulation study:
>https://doi.org/10.1080/00220973.2018.1561404
>
>It's also now possible to fit and compare both models using metafor:
>res_hom <- rma(yi, vi, mods = ~ alloc, data=dat)
>res_het <- rma(yi, vi, mods = ~ alloc, scale = ~ alloc, data=dat)
>anova(res_het, res_hom) # Likelihood ratio test and model fit statistics
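>
>To make that snippet self-contained, dat can be, for example, the BCG
>example data that ship with metafor (just a stand-in on my part):
>
>library(metafor)
>dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)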
>
>Some analysts would simply fit both models and justify
>their preferred model based on the fit statistics. Others might argue that
>it's preferable to always use the more flexible model for purposes of
>testing moderators; see Rodriguez et al. (2023;
>https://doi.org/10.1111/bmsp.12299).
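>
>Under the more flexible model, the omnibus moderator test can also be
>requested directly (a sketch, assuming the default coefficient ordering so
>that coefficients 2:3 are the two allocation contrasts):
>
>anova(res_het, btt=2:3) # Wald-type test of the moderator in the het model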
>
>James
>
>On Mon, Mar 20, 2023 at 1:04 PM Catia Oliveira via R-sig-meta-analysis <
>r-sig-meta-analysis at r-project.org> wrote:
>
>> Dear all,
>>
>> Does anyone know of a manuscript that has compared the effect sizes from
>> running separate meta-analyses per level of a variable of interest with
>> those from a meta-regression where we remove the intercept?
>>
>> e.g.,
>>
>> ### mixed-effects meta-regression model with categorical moderator
>> res <- rma(yi, vi, mods = ~ alloc, data=dat)
>> res
>>
>> You will find:
>>
>> Test of Moderators (coefficients 2:3):
>> QM(df = 2) = 1.7675, p-val = 0.4132
>>
>> Model Results:
>>
>>                  estimate      se     zval    pval    ci.lb   ci.ub
>> intrcpt           -0.5180  0.4412  -1.1740  0.2404  -1.3827  0.3468
>> allocrandom       -0.4478  0.5158  -0.8682  0.3853  -1.4588  0.5632
>> allocsystematic    0.0890  0.5600   0.1590  0.8737  -1.0086  1.1867
>>
>>
>> Instead of doing this, we could also run one meta-analysis for the
>> random-allocation studies (allocrandom) and another for the
>> systematic-allocation studies (allocsystematic).
>> I know the results will be similar; I just need something that demonstrates
>> this beyond running both models and presenting the findings. Also,
>> meta-regression allows us to compare the different levels, which is the
>> point. I don't understand why we get questioned about this for a
>> meta-regression when, if this were a linear regression, the approach
>> would be standard.
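>>
>> For concreteness, a sketch of the two approaches I have in mind, using the
>> BCG example data from metafor as a stand-in for my own data:
>>
>> library(metafor)
>> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
>>
>> ### meta-regression without the intercept: one pooled estimate per level
>> rma(yi, vi, mods = ~ alloc - 1, data=dat)
>>
>> ### versus one separate meta-analysis per level
>> rma(yi, vi, data=dat, subset=(alloc=="random"))
>> rma(yi, vi, data=dat, subset=(alloc=="systematic"))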
>>
>> Best wishes,
>>
>> Catia
>>
>> --
>> Cátia Margarida Ferreira de Oliveira
>> Research Associate
>> Department of Psychology, Room C222
>> University of York, YO10 5DD
>> Twitter: @CatiaMOliveira
>> pronouns: she, her

