[R-meta] Assessing selection bias / multivariate meta-analysis

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer using maastrichtuniversity.nl
Thu Nov 21 13:31:23 CET 2024


And I just stumbled across this:

https://github.com/jepusto/metaselection

James, don't hide all your good work from us!

Best,
Wolfgang

> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf
> Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
> Sent: Thursday, November 21, 2024 13:21
> To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-
> project.org>
> Cc: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Subject: Re: [R-meta] Assessing selection bias / multivariate meta-analysis
>
> Dear Pia,
>
> Generally, I don't think there really is any method that is going to be a great
> choice here. The 'Egger sandwich' (i.e., an Egger type regression model using
> cluster-robust inference methods) is a decent option, since it logically
> generalizes the standard Egger regression method to this context, but it is
> unclear what kind of bias/selection effect this may pick up (missing studies,
> missing estimates within studies, or a combination thereof).
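>
> In metafor, a minimal version of this looks something like the following
> (just a sketch; 'study' and 'esid' are placeholders for the study and
> estimate identifiers):
>
> library(metafor)
> dat$sei <- sqrt(dat$vi)                  # standard errors as moderator
> # Egger-type multilevel model for dependent estimates
> egger <- rma.mv(yi, vi, mods = ~ sei,
>                 random = ~ 1 | study/esid, data = dat)
> # cluster-robust ('sandwich') inference at the study level; the test of
> # the 'sei' coefficient is then the 'Egger sandwich' test
> robust(egger, cluster = study, clubSandwich = TRUE)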
>
> Yes, for the 3PSM, you would have to either ignore the dependencies or select
> one estimate per study (and maybe repeat the latter a large number of times for
> different subsets).
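>
> For the repeated-subsetting idea, a rough sketch (selmodel() fits the 3PSM;
> again, 'study' is a placeholder):
>
> set.seed(42)
> ests <- replicate(500, {
>    # pick one estimate per study at random
>    idx <- sapply(split(seq_len(nrow(dat)), dat$study),
>                  function(ix) ix[sample(length(ix), 1)])
>    fit <- rma(yi, vi, data = dat[idx,], method = "ML")
>    sel <- tryCatch(selmodel(fit, type = "stepfun", steps = 0.025),
>                    error = function(e) NULL)
>    if (is.null(sel)) NA else sel$beta[1]
> })
> summary(ests)   # distribution of the 3PSM-adjusted estimates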
>
> I assume you are familiar with these papers. If not, they are directly relevant:
>
> Rodgers, M. A., & Pustejovsky, J. E. (2021). Evaluating meta-analytic methods to
> detect selective reporting in the presence of dependent effect sizes.
> Psychological Methods, 26(2), 141-160. https://doi.org/10.1037/met0000300
>
> Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S. N., Onghena,
> P., & Van den Noortgate, W. (2021). Detecting selection bias in meta-analyses
> with multiple outcomes: A simulation study. The Journal of Experimental
> Education, 89(1), 125-144. https://doi.org/10.1080/00220973.2019.1582470
>
> Nakagawa, S., Lagisz, M., Jennions, M. D., Koricheva, J., Noble, D. W. A.,
> Parker, T. H., Sánchez-Tójar, A., Yang, Y., & O'Dea, R. E. (2022). Methods for
> testing publication bias in ecological and evolutionary meta-analyses. Methods
> in Ecology and Evolution, 13(1), 4-21. https://doi.org/10.1111/2041-210X.13724
>
> I think James is working on some methods related to this topic:
>
> https://jepusto.com/posts/cluster-bootstrap-selection-model/
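>
> The core idea there (pairing a selection model with a nonparametric cluster
> bootstrap) can be sketched with metafor alone; this is just the concept, not
> the metaselection implementation:
>
> # resample whole studies with replacement, refit the 3PSM on the stacked
> # data, and use the bootstrap distribution for inference
> boot_est <- replicate(999, {
>    studies <- sample(unique(dat$study), replace = TRUE)
>    bdat <- do.call(rbind, lapply(studies, function(s) dat[dat$study == s,]))
>    fit <- try(selmodel(rma(yi, vi, data = bdat, method = "ML"),
>                        type = "stepfun", steps = 0.025), silent = TRUE)
>    if (inherits(fit, "try-error")) NA else fit$beta[1]
> })
> quantile(boot_est, c(.025, .975), na.rm = TRUE)   # percentile CI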
>
> Best,
> Wolfgang
>
> > -----Original Message-----
> > From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On
> Behalf
> > Of Pia-Magdalena Schmidt via R-sig-meta-analysis
> > Sent: Wednesday, November 20, 2024 21:58
> > To: r-sig-meta-analysis using r-project.org
> > Cc: Pia-Magdalena Schmidt <pia-magdalena.schmidt using uni-bonn.de>
> > Subject: [R-meta] Assessing selection bias / multivariate meta-analysis
> >
> > Dear all,
> > Although this topic has been discussed several times and I read the archives
> > and referenced papers, I’m still not sure how to assess and possibly correct
> > for selection bias in multivariate meta-analyses.
> >
> > I used the metafor package and ran meta-analyses with SMCC as effect size
> > (all studies used within-subject designs) and fitted rma.mv models as several
> > studies report more than one effect size. Furthermore, I used cluster-robust
> > methods to examine the robustness of the models.
> > For a subset of my data, I used meta-regressions with one continuous
> > moderator.
> > All effect sizes are from published journal articles. The number of
> > included studies ranges from 6 to 30, with between 10 and 45 effect sizes.
> >
> > Since I want to take the dependencies into account, I would not use funnel
> > plots or trim-and-fill. I wonder whether Egger's regression test adapted to
> > rma.mv, PET-PEESE, perhaps the sensitivity analysis suggested by Mathur &
> > VanderWeele (2020), and 3PSM would be a reasonable way to go? The latter
> > would only work with one effect size per study or an aggregated effect
> > size, right?
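> >
> > In code, I imagine something like this for PET-PEESE (a sketch, using the
> > variables from my excerpt below; the Mathur & VanderWeele analysis is
> > implemented in the PublicationBias package):
> >
> > pet   <- rma.mv(yi, V, mods = ~ sqrt(vi),
> >                 random = ~ 1 | id_database/effect_id, data = dat)
> > peese <- rma.mv(yi, V, mods = ~ vi,
> >                 random = ~ 1 | id_database/effect_id, data = dat)
> > robust(pet,   cluster = id_database, clubSandwich = TRUE)
> > robust(peese, cluster = id_database, clubSandwich = TRUE)
> > # conditional rule: report the PEESE intercept only if the PET intercept
> > # differs significantly from zero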
> >
> > I would be very grateful for any recommendations!
> > Best,
> > Pia
> >
> > Below is an excerpt from my code:
> > library(metafor)
> > dat <- escalc(measure="SMCC", m1i=m1i, sd1i=sd1i, m2i=m2i, sd2i=sd2i,
> >               ni=ni, ri=ri, data=dat)
> > V <- vcalc(vi, cluster=id_database, obs=effect_id, rho=0.605, data=dat)
> > res <- rma.mv(yi, V, random = ~ 1 | id_database/effect_id, data=dat)
> > res.robust <- robust(res, cluster=id_database, clubSandwich=TRUE)
> >
> > # subset with the continuous moderator (note: V and the data must be
> > # subset to the matching rows, not the full V and dat)
> > res_LOR <- rma.mv(yi, V_LOR, random = ~ 1 | id_database/effect_id,
> >                   mods = ~ dose, data=dat_LOR)

