[R-meta] Assessing selection bias / multivariate meta-analysis
Pia-Magdalena Schmidt
pia-magdalena.schmidt using uni-bonn.de
Fri Nov 22 17:14:46 CET 2024
Dear Wolfgang & James,
Many thanks for your helpful answers and thoughts!
I had indeed read the papers you mentioned, but I hadn't come across the
metaselection package yet. I will try it out and get back to you if needed.
Best,
Pia
On Thu, 21 Nov 2024 17:18:28 +0000
Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
<r-sig-meta-analysis using r-project.org> wrote:
> Thanks for providing additional info about
>metaselection. I think this package will be a very
>important tool in the meta-analytic toolbox!
>
> I was inspired by your recent blog post:
>
> https://jepusto.com/posts/beta-density-selection-models/
>
> and added the truncated beta selection model also to
>selmodel(). I took a very quick peek at your code and I
>think you are using analytic gradients, which helps to
>speed up / stabilize the model fitting. But it's nice to
>have two parallel implementations to cross-check the
>results.
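>
> For anyone wanting to try it, a minimal sketch of fitting a beta-type
> selection model with selmodel() -- the data and model are just the
> standard dat.bcg example, and whether the truncated variant is requested
> via an extra argument or a separate type will depend on your metafor
> version, so check help(selmodel):
>
> library(metafor)
> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
> res <- rma(yi, vi, data=dat)        # standard random-effects model
> sel <- selmodel(res, type="beta")   # beta density selection model
> summary(sel)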
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: James Pustejovsky <jepusto using gmail.com>
>> Sent: Thursday, November 21, 2024 15:10
>> To: R Special Interest Group for Meta-Analysis
>><r-sig-meta-analysis using r-
>> project.org>
>> Cc: Viechtbauer, Wolfgang (NP)
>><wolfgang.viechtbauer using maastrichtuniversity.nl>
>> Subject: Re: [R-meta] Assessing selection bias /
>>multivariate meta-analysis
>>
>> I was going to chime in about the metaselection package
>> (https://github.com/jepusto/metaselection) -- it's still under development,
>> but the core functionality and documentation are in place.
>>The package implements the
>> bootstrapped selection model as demonstrated in my blog
>>post
>> (https://jepusto.com/posts/cluster-bootstrap-selection-model/),
>>but with a much
>> easier interface and faster calculation; it also
>>implements selection models
>> with cluster-robust standard errors, though these do not seem to be as
>> accurate as bootstrapping. Folks are welcome to give the package a
>>try and to reach out
>> with questions or potential bugs if you run into
>>anything. We are working on a
>> paper describing the methods implemented in the package
>>and reporting pretty
>> extensive simulations about their performance.
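>>
>> For anyone who wants to try it out: as far as I know the package is not
>> on CRAN yet, so installation would go through GitHub (a sketch assuming
>> the usual remotes workflow; see the package README for the actual
>> model-fitting functions and their arguments):
>>
>> # install.packages("remotes")
>> remotes::install_github("jepusto/metaselection")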
>>
>> My student Man Chen (now on the faculty at UT Austin)
>>has studied a whole bunch
>> of the available methods for selective reporting bias
>>correction, looking
>> specifically at how they perform in meta-analyses with
>>dependent effect sizes,
>> and proposing adaptations of some of the methods to
>>better acknowledge
>> dependency. Our working paper on this is here:
>> https://osf.io/preprints/metaarxiv/jq52s
>>
>> Pia asked about a few other possible techniques:
>> - The Egger test / PET-PEESE approach with
>>cluster-robust variance estimation
>> is reasonable but, as Wolfgang noted, it is not
>>specifically diagnostic about
>> missing studies vs. missing effects. If the effect sizes
>>nested within a given
>> study tend to have similar standard errors, then it will
>>mostly be picking up on
>> association between study sample size and study-level
>>average effect size. And
>> of course, it also has the limitation that this
>>small-study association can be
>> caused by things other than selective reporting.
>> - Mathur & VanderWeele's sensitivity analysis is quite
>>useful, though it does
>> not provide an estimate of the severity of selective
>>reporting (instead, it
>> provides information about the degree of potential bias
>>assuming a specific
>> level of selection).
>> - For 3PSM, the cluster-bootstrap technique implemented in the
>> metaselection package is a way to deal with dependent effects, so it is
>> no longer necessary to use ad hoc approaches like ignoring dependence,
>> aggregating to the study level, or selecting a single effect per study
>> (a rough sketch of the general idea follows below).
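>>
>> To illustrate the general idea (not the metaselection implementation
>> itself, which is far more efficient and careful), a rough sketch of a
>> cluster bootstrap around a 3PSM using metafor -- 'dat' with columns yi,
>> vi, and study (the clustering variable) is assumed, and convergence
>> failures are simply skipped:
>>
>> library(metafor)
>>
>> fit_3psm <- function(d) {
>>   fit <- rma(yi, vi, data=d)                        # naive model, dependence ignored
>>   sel <- selmodel(fit, type="stepfun", steps=0.025) # 3PSM with one cutpoint at p = .025
>>   c(beta = unname(sel$beta[1]), delta = unname(sel$delta[2]))
>> }
>>
>> set.seed(1234)
>> studies <- unique(dat$study)
>> boot_est <- replicate(1999, {
>>   samp <- sample(studies, replace=TRUE)             # resample whole studies (clusters)
>>   d <- do.call(rbind, lapply(samp, function(s) dat[dat$study == s, ]))
>>   tryCatch(fit_3psm(d), error = function(e) c(beta = NA_real_, delta = NA_real_))
>> })
>> quantile(boot_est["beta", ], c(.025, .975), na.rm=TRUE) # percentile CI for the average effect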
>>
>> James
>>
>> On Thu, Nov 21, 2024 at 6:37 AM Viechtbauer, Wolfgang
>>(NP) via R-sig-meta-
>> analysis <r-sig-meta-analysis using r-project.org>
>>wrote:
>> And I just stumbled across this:
>>
>> https://github.com/jepusto/metaselection
>>
>> James, don't hide all your good work from us!
>>
>> Best,
>> Wolfgang
>>
>> > -----Original Message-----
>> > From: R-sig-meta-analysis
>> <r-sig-meta-analysis-bounces using r-project.org>
>> On Behalf
>> > Of Viechtbauer, Wolfgang (NP) via R-sig-meta-analysis
>> > Sent: Thursday, November 21, 2024 13:21
>> > To: R Special Interest Group for Meta-Analysis
>><r-sig-meta-analysis using r-
>> > project.org>
>> > Cc: Viechtbauer, Wolfgang (NP)
>> <wolfgang.viechtbauer using maastrichtuniversity.nl>
>> > Subject: Re: [R-meta] Assessing selection bias /
>>multivariate meta-analysis
>> >
>> > Dear Pia,
>> >
>> > Generally, I don't think there really is any method
>>that is going to be a
>> great
>> > choice here. The 'Egger sandwich' (i.e., an Egger type
>>regression model using
>> > cluster-robust inference methods) is a decent option,
>>since it logically
>> > generalizes the standard Egger regression method to
>>this context, but it is
>> > unclear what kind of bias/selection effect this may
>>pick up (missing studies,
>> > missing estimates within studies, a combination
>>thereof).
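>> >
>> > To make this concrete, a minimal sketch of the 'Egger sandwich' (and,
>> > by extension, PET) with rma.mv() plus cluster-robust inference; yi, vi,
>> > V, study, and esid are placeholders for whatever the data actually
>> > contain:
>> >
>> > library(metafor)
>> > egger <- rma.mv(yi, V, mods = ~ sqrt(vi),
>> >                 random = ~ 1 | study/esid, data = dat)
>> > robust(egger, cluster = study, clubSandwich = TRUE)
>> > # coefficient on sqrt(vi): Egger-type test for small-study effects
>> > # intercept: PET estimate of the average effect as the SE goes to 0
>> > # PEESE variant: use mods = ~ vi instead of ~ sqrt(vi)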
>> >
>> > Yes, for the 3PSM, you would have to either ignore the dependencies or
>> > select one estimate per study (and maybe repeat the latter a large
>> > number of times for different subsets).
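>> >
>> > A quick sketch of that 'repeat over random subsets' idea (one estimate
>> > per study drawn at random, 3PSM refit each time; yi, vi, and study are
>> > placeholders for the actual columns, and convergence issues are ignored
>> > for brevity):
>> >
>> > library(metafor)
>> > set.seed(1234)
>> > est <- replicate(1000, {
>> >   sub <- do.call(rbind, lapply(split(dat, dat$study),
>> >                                function(d) d[sample(nrow(d), 1), ]))
>> >   sel <- selmodel(rma(yi, vi, data=sub), type="stepfun", steps=0.025)
>> >   sel$beta[1]
>> > })
>> > summary(est)  # distribution of the 3PSM estimate across random subsets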
>> >
>> > I assume you are familiar with these papers. If not,
>>they are directly
>> relevant:
>> >
>> > Rodgers, M. A., & Pustejovsky, J. E. (2021).
>>Evaluating meta-analytic methods
>> to
>> > detect selective reporting in the presence of
>>dependent effect sizes.
>> > Psychological Methods, 26(2), 141-160.
>>https://doi.org/10.1037/met0000300
>> >
>> > Fernández-Castilla, B., Declercq, L., Jamshidi, L.,
>>Beretvas, S. N., Onghena,
>> > P., & Van den Noortgate, W. (2021). Detecting
>>selection bias in meta-analyses
>> > with multiple outcomes: A simulation study. The
>>Journal of Experimental
>> > Education, 89(1), 125-144.
>>https://doi.org/10.1080/00220973.2019.1582470
>> >
>> > Nakagawa, S., Lagisz, M., Jennions, M. D., Koricheva,
>>J., Noble, D. W. A.,
>> > Parker, T. H., Sánchez-Tójar, A., Yang, Y., & O'Dea,
>>R. E. (2022). Methods for
>> > testing publication bias in ecological and
>>evolutionary meta-analyses. Methods
>> > in Ecology and Evolution, 13(1), 4-21.
>>https://doi.org/10.1111/2041-210X.13724
>> >
>> > I think James is working on some methods related to
>>this topic:
>> >
>> >
>>https://jepusto.com/posts/cluster-bootstrap-selection-model/
>> >
>> > Best,
>> > Wolfgang
>> >
>> > > -----Original Message-----
>> > > From: R-sig-meta-analysis
>> <r-sig-meta-analysis-bounces using r-project.org>
>> On
>> > Behalf
>> > > Of Pia-Magdalena Schmidt via R-sig-meta-analysis
>> > > Sent: Wednesday, November 20, 2024 21:58
>> > > To: r-sig-meta-analysis using r-project.org
>> > > Cc: Pia-Magdalena Schmidt
>> <pia-magdalena.schmidt using uni-bonn.de>
>> > > Subject: [R-meta] Assessing selection bias /
>>multivariate meta-analysis
>> > >
>> > > Dear all,
>> > > Although this topic has been discussed several times
>>and I read the archives
>> > > and referenced papers, I’m still not sure how to
>>assess and possibly correct
>> > > for selection bias in multivariate meta-analyses.
>> > >
>> > > I used the metafor package and ran meta-analyses
>>with SMCC as effect size
>> > > (all studies used within-subject designs) and fitted rma.mv() models,
>> > > as several
>> > > studies report more than one effect size.
>>Furthermore, I used cluster-robust
>> > > methods to examine the robustness of the models.
>> > > For a subset of my data, I used meta-regressions
>>with one continuous
>> > > moderator.
>> > > All effect sizes are from published journal
>> > > articles. The number of included studies ranges from 6 to 30, with the
>> > > number of effect sizes ranging from 10 to 45.
>> > >
>> > > Since I want to take the dependencies into account,
>>I would not use funnel
>> > > plots or trim and fill. I wonder if using Egger's
>>regression test adjusted
>> > > for rma.mv() models, as well as PET-PEESE and perhaps the sensitivity
>> > > analysis suggested by Mathur & VanderWeele (2020), as well as
>>3PSM would be a
>> > > reasonable way to go? Although the latter would only
>>use one effect size per
>> > > study or an aggregated effect size, right?
>> > >
>> > > I would be very grateful for any recommendations!
>> > > Best,
>> > > Pia
>> > >
>> > > Below is an excerpt from my code:
>> > > library(metafor)
>> > >
>> > > # SMCC effect sizes (change scores from within-subject designs)
>> > > ES_all <- escalc(measure="SMCC", m1i=m1i, sd1i=sd1i, m2i=m2i, sd2i=sd2i,
>> > >                  ni=ni, ri=ri, pi=pi, data=dat)
>> > >
>> > > # working var-cov matrix assuming a correlation of 0.605 among effect
>> > > # sizes within the same study
>> > > V <- vcalc(vi, cluster=id_database, obs=effect_id, rho=0.605, data=ES_all)
>> > >
>> > > # multilevel model: effect sizes nested within studies
>> > > res <- rma.mv(yi, V, random = ~ 1 | id_database/effect_id, data=ES_all)
>> > >
>> > > # cluster-robust (sandwich) inference
>> > > res.robust <- robust(res, cluster = id_database, clubSandwich = TRUE)
>> > >
>> > > # subset: meta-regression with a continuous moderator
>> > > res_LOR <- rma.mv(yi=ES_LOR$yi, V, random = ~ 1 | id_database/effect_id,
>> > >                   mods = ~ dose, data = dat)
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> To manage your subscription to this mailing list, go to:
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis