[R-meta] Aggregating dependent effect sizes for trimfill

James Pustejovsky jepusto at gmail.com
Tue Jul 23 22:51:30 CEST 2024


Hi Andreas,

I think the points that Lukasz raised are important. As far as I can tell,
trim and fill just does not work very well for correcting selective
reporting bias, even in the simpler context where you have only one
effect size estimate per study. The rank correlation test is doing
something very similar to Egger's regression, so I don't think it really
adds much. A more compelling form of sensitivity analysis would be to use
some form of selection model, such as those implemented in
metafor::selmodel().
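
For instance, a minimal sketch with metafor (assuming a data frame dat with
one effect size per study in columns yi and vi; the names are just for
illustration) might look like:

library(metafor)
# random-effects model on the observed effect sizes
res <- rma(yi, vi, data = dat)
# three-parameter selection model: a step function with a single step at a
# one-sided p-value of .025 (i.e., two-sided significance at alpha = .05)
sel <- selmodel(res, type = "stepfun", steps = 0.025)
summary(sel)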

In addition to the problem Lukasz raised about concealing heterogeneity,
another potential problem with applying selection models to aggregated data
is that the assumptions of the models would then pertain to the averaged
effect sizes rather than to the effects for individual outcomes. For
example, in a three-parameter selection model, we assume that
non-significant effect sizes (those with p >= .05) are censored at some
rate, so that not all of the effect sizes that have been measured are
actually reported and available for inclusion in the meta-analysis. If you
apply the model to aggregated data, then the assumption is that censoring
depends on the p-value of the *averaged* effect size, a quantity that might
never even be reported in the primary study, so the assumption might not be
all that plausible. Likewise, if you apply trim-and-fill to the aggregated
data, then the assumption is that the studies with the most extreme
*averaged* effect sizes are censored; again, perhaps not all that
plausible.
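
Concretely, the aggregate-then-model workflow that these concerns apply to
would look something like the following (a rough sketch; dat is assumed to
be an escalc object with a study identifier column, and rho = 0.6 is an
assumed within-study correlation):

library(metafor)
# average the dependent effect sizes within each study
agg <- aggregate(dat, cluster = study, rho = 0.6)
res_agg <- rma(yi, vi, data = agg)   # one averaged effect per study
trimfill(res_agg)                    # trim-and-fill on the averages
selmodel(res_agg, type = "stepfun", steps = 0.025)  # 3PSM on the averages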

An alternative to fitting selection models to aggregated data is to fit
them to the raw effect sizes (as if all effect sizes were independent) but
then to correct the standard errors using a method that does account for
the dependence structure. The appeal of this strategy is that the
assumptions of the selection model would then apply at the level of effects
for individual outcomes (i.e., selective reporting is a function of the
p-value for the treatment effect on a specific outcome measure at a
specific point in time). Megha Joshi and I describe one way to implement
this here:
https://jepusto.com/posts/cluster-bootstrap-selection-model/
We're currently working on a package that will make this workflow easier
and hopefully quicker, but it's not ready for release yet.
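
The basic idea is simple enough to sketch by hand, though. This is only a
rough illustration (not the code from the post), assuming a data frame dat
with columns yi, vi, and study:

library(metafor)
fit_once <- function(d) {
  res <- rma(yi, vi, data = d)  # fit as if all effects were independent
  sel <- selmodel(res, type = "stepfun", steps = 0.025)
  sel$beta[1]                   # selection-adjusted average effect
}
set.seed(20240723)
ids <- unique(dat$study)
boots <- replicate(1999, {
  pick <- sample(ids, length(ids), replace = TRUE)  # resample whole studies
  d <- do.call(rbind, lapply(pick, function(s) dat[dat$study == s, ]))
  tryCatch(fit_once(d), error = function(e) NA_real_)
})
quantile(boots, c(.025, .975), na.rm = TRUE)  # percentile bootstrap CI

Resampling intact studies (rather than individual effect sizes) is what
makes the resulting interval account for the dependence structure.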

In addition to the above, Maya Mathur's R package PublicationBias
implements some useful worst-case sensitivity analyses, and the estimation
methods are set up to handle dependent effect sizes. These sensitivity
analyses don't try to estimate the degree of selective reporting. Rather,
they answer two questions: a) to what degree could inferences about the
average effect size be influenced by very severe selective reporting, and
b) how strong would selective reporting have to be to invalidate one's
inference about the average effect size (for instance, to shrink the
average effect size estimate to zero)? They can therefore be a nice
complement to other forms of selective reporting analysis (especially if
you find that your estimates of the degree of selection are very
imprecise, which is often the case in my experience).
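
For example (a sketch only; the argument names follow my reading of the
package documentation and may differ across versions, so please check the
help pages):

library(PublicationBias)
# corrected estimate assuming significant positive results are 4 times
# more likely to be reported than other results
pubbias_meta(yi = dat$yi, vi = dat$vi, cluster = dat$study,
             selection_ratio = 4, model_type = "robust",
             favor_positive = TRUE)
# how severe would selection have to be to shrink the average effect
# estimate to zero?
pubbias_svalue(yi = dat$yi, vi = dat$vi, cluster = dat$study,
               q = 0, model_type = "robust", favor_positive = TRUE)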

James

On Tue, Jul 23, 2024 at 6:23 AM Lukasz Stasielowicz via R-sig-meta-analysis
<r-sig-meta-analysis at r-project.org> wrote:

> Dear Andreas,
>
> As indicated in the article mentioned by Michael Dewey, conducting more
> publication bias tests is not necessarily informative. Some approaches
> developed for simple meta-analytic models tend to perform poorly when
> dealing with dependent effects, or even in the simple settings for which
> they were developed.
> Furthermore, caution is advised when aggregating effect sizes, as
> aggregating might mask heterogeneity. For example, an average effect of
> 0 might mask the fact that there are both positive and negative effects
> and no null effects in the sample.
>
> Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019).
> Correcting for Bias in Psychology: A Comparison of Meta-Analytic
> Methods. Advances in Methods and Practices in Psychological Science,
> 2(2), 115–144. https://doi.org/10.1177/2515245919847196
>
> Renkewitz, F., & Keiner, M. (2019). How to detect publication bias in
> psychological research: A comparative evaluation of six statistical
> methods. Zeitschrift für Psychologie, 227(4), 261–279.
> https://doi.org/10.1027/2151-2604/a000386
>
> Since you mentioned that you are using the inverse standard error as a
> moderator, you might also want to take a look at the following
> recommendations, which account for the fact that the standard error of
> an SMD is itself a function of the effect size:
>
> Pustejovsky, J. E., & Rodgers, M. A. (2019). Testing for funnel plot
> asymmetry of standardized mean differences. Research Synthesis Methods,
> 10(1), 57–71. https://doi.org/10.1002/jrsm.1332
>
>
> Best,
> --
> Lukasz Stasielowicz
> Osnabrück University
> Institute for Psychology
> Research methods, psychological assessment, and evaluation
> Lise-Meitner-Straße 3
> 49076 Osnabrück (Germany)
> Twitter: https://twitter.com/l_stasielowicz
> Tel.: +49 541 969-7735
>
> On 23.07.2024 12:00, r-sig-meta-analysis-request at r-project.org wrote:
> >
> > Message: 1
> > Date: Tue, 23 Jul 2024 08:59:05 +0000
> > From: Andreas Voldstad <andreas.voldstad at kellogg.ox.ac.uk>
> > Subject: [R-meta] Aggregating dependent effect sizes for trimfill
> >
> > Dear Wolfgang, James and all,
> >
> > I am doing a multilevel meta-analysis of SMDs, with partially empirical
> > correlated and hierarchical effects ("PECHE"), corrected with
> > cluster-robust variance estimation.
> >
> > For assessment of publication bias risk, I have done Egger's regression
> > by standardising the effect sizes and adding the inverse of their
> > standard error as a moderator.
> >
> > I would like to add some of the methods that are not compatible with
> > dependent effect sizes, such as trim and fill, the rank correlation
> > test, and perhaps stepwise models.
> >
> > For visualisation, I have already aggregated the data based on this
> > post:
> > https://www.metafor-project.org/doku.php/tips:forest_plot_with_aggregated_values
> >
> > I confirmed that running rma.uni with REML on the aggregated data, and
> > then applying RVE, yields practically the same results as the original
> > multilevel model (i.e., up to a .01 difference in the 95% CI).
> >
> > I am wondering what you think in general about applying methods not
> > suitable for rma.mv models, such as trimfill and ranktest, to this
> > aggregated data (and the corresponding aggregated funnel plot)?
> >
> > I performed rma.uni on the aggregated data and passed it on to trimfill
> > to get k0, a filled funnel plot, and a corrected effect.
> >
> > If this is a valid approach, I am also wondering if there is a way to
> > apply robust() to the trimfill-corrected effect, so that it will be
> > comparable to the effect from my original analysis?
> >
> > Best wishes,
> >
> > Andreas Voldstad (he/him)
> > PhD student in Psychiatry
> > University of Oxford
> > Please don't feel obliged to read or respond to my email outside your
> > own working hours.
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Tue, 23 Jul 2024 10:14:19 +0100
> > From: Michael Dewey <lists at dewey.myzen.co.uk>
> > Subject: Re: [R-meta] Aggregating dependent effect sizes for trimfill
> >
> > Dear Andreas
> >
> > You might be interested in some work James and a co-author have
> > published in this area.
> >
> > https://psycnet.apa.org/doi/10.1037/met0000300
> >
> > No doubt when the sun rises over the new world he will chip in.
> >
> > Michael
> >
>
> _______________________________________________
> R-sig-meta-analysis mailing list @ R-sig-meta-analysis at r-project.org
> To manage your subscription to this mailing list, go to:
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>
