[R-meta] Meta-analysis per level or meta-regression

James Pustejovsky
Mon Mar 20 19:27:11 CET 2023


Hi Catia,

I don't know of research that has looked at differences between these
approaches empirically.

I would interpret the issue in terms of a difference between two
meta-regression models: one in which the between-study heterogeneity is
constrained to be equal across levels of the moderator and one in which the
between-study heterogeneity is allowed to differ by level of the moderator.
María Rubio-Aparicio and colleagues compared these two models in a
simulation study:
https://doi.org/10.1080/00220973.2018.1561404
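
To make the connection to running separate meta-analyses per level concrete,
here is a rough sketch using the dat.bcg example data that ships with metafor
(which is where the alloc moderator in your output comes from); the object
names and the no-intercept parameterization are just illustrative:

library(metafor)

# compute log risk ratios for the BCG example data
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# separate random-effects meta-analyses per level of the moderator
res_alt  <- rma(yi, vi, data = dat, subset = (alloc == "alternate"))
res_rand <- rma(yi, vi, data = dat, subset = (alloc == "random"))
res_syst <- rma(yi, vi, data = dat, subset = (alloc == "systematic"))

# no-intercept meta-regression with between-study heterogeneity allowed
# to differ by level of alloc (a location-scale model)
res_ls <- rma(yi, vi, mods = ~ alloc - 1, scale = ~ alloc, data = dat)
res_ls

The location coefficients from res_ls should line up with the separate
subgroup estimates (up to estimation details), since the fully saturated
model with level-specific heterogeneity is essentially the subgroup analysis
written as a single model.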

It's also now possible to fit and compare both models using metafor:
# between-study heterogeneity constrained to be equal across levels of alloc
res_hom <- rma(yi, vi, mods = ~ alloc, data = dat)
# location-scale model: heterogeneity allowed to differ by level of alloc
res_het <- rma(yi, vi, mods = ~ alloc, scale = ~ alloc, data = dat)
anova(res_het, res_hom) # likelihood ratio test and model fit statistics

Some analysts would simply fit both models and justify
their preferred model based on the fit statistics. Others might argue that
it's preferable to always use the more flexible model for purposes of
testing moderators; see Rodriguez et al. (2023;
https://doi.org/10.1111/bmsp.12299).
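
For instance, a small sketch (the indices in btt are just illustrative,
assuming the default dummy coding of alloc shown in your output below):

# Wald-type test of the moderator under the more flexible model
anova(res_het, btt = 2:3)

# fit statistics (AIC, BIC, etc.) for both models side by side
fitstats(res_hom, res_het)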

James

On Mon, Mar 20, 2023 at 1:04 PM Catia Oliveira via R-sig-meta-analysis <
r-sig-meta-analysis@r-project.org> wrote:

> Dear all,
>
> Does anyone know of a manuscript that has compared the effect sizes from
> running separate meta-analyses per level of a variable of interest with
> those from a meta-regression where we remove the intercept?
>
> e.g.,
>
> ### mixed-effects meta-regression model with categorical moderator
> res <- rma(yi, vi, mods = ~ alloc, data=dat)
> res
>
> You will find:
>
> Test of Moderators (coefficients 2:3):
> QM(df = 2) = 1.7675, p-val = 0.4132
>
> Model Results:
>
>                  estimate      se     zval    pval    ci.lb   ci.ub
> intrcpt           -0.5180  0.4412  -1.1740  0.2404  -1.3827  0.3468
> allocrandom       -0.4478  0.5158  -0.8682  0.3853  -1.4588  0.5632
> allocsystematic    0.0890  0.5600   0.1590  0.8737  -1.0086  1.1867
>
>
> Instead of doing this, we could also run one meta-analysis for allocrandom
> and another for allocsystematic.
> I know the results will be similar; I just need something that
> demonstrates this beyond running the model and presenting the findings.
> Also, meta-regression allows us to compare the different levels, which is
> the point. I don't understand why we are questioned about this when running
> a meta-regression, but if this were a linear regression, this approach
> would be standard.
>
> Best wishes,
>
> Catia
>
> --
> Cátia Margarida Ferreira de Oliveira
> Research Associate
> Department of Psychology, Room C222
> University of York, YO10 5DD
> Twitter: @CatiaMOliveira
> pronouns: she, her
