[R-meta] Preregistering publication bias analysis
James Pustejovsky
jepusto at gmail.com
Thu Dec 31 05:16:58 CET 2020
Hi Lena,
In addition to Michael's comments, I'll offer a couple of notes.
First, I do see the argument that, due to superior performance of the
Hedges-Vevea 3-parameter selection model (3PSM), it seems odd to even
report anything else. However, I think there is still some value in
reporting an Egger regression test (as a secondary analysis) because it
addresses a different question than the 3PSM. The 3PSM is premised on the
assumption that selective publication happens based on a specific p-value
threshold, with studies under that threshold being more likely to be
published than studies that have p-values above the threshold. In contrast,
Egger's test is looking much more broadly at whether there is any sort of
asymmetry in the funnel plot. If it is significant, it indicates that there
is *something* amiss--that less precise studies tend to have different
average effect sizes than more precise studies--but this could be due to a
variety of factors, including selective publication practices but not
exclusively so. Since the two analyses provide different pieces of
information, it seems to me that it's useful to report both.
Second, it's possible to do Egger's regression within the RVE framework, so
that you don't need to bother with sub-sampling effect sizes within
studies. Details here:
https://osf.io/preprints/metaarxiv/vqp8u/
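For instance, a minimal sketch of the idea with metafor and clubSandwich might look like the following (the data frame dat and its yi, vi, and study columns are placeholders for your own variable names):

library(metafor)
library(clubSandwich)

# Standard error of each effect and a within-study effect ID
dat$sei  <- sqrt(dat$vi)
dat$esid <- 1:nrow(dat)

# Multilevel working model with the standard error as a moderator
# (an Egger-type regression), effect sizes nested within studies
egger_fit <- rma.mv(yi, vi, mods = ~ sei,
                    random = ~ 1 | study / esid,
                    data = dat)

# Cluster-robust (CR2) test of the slope on sei; a significant slope
# indicates small-study effects / funnel plot asymmetry
coef_test(egger_fit, vcov = "CR2", cluster = dat$study)

This way you can keep all 187 effect sizes in the model rather than sub-sampling one per study.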
Third, I would expect that 40 studies should be adequate for fitting the
3PSM. The problems with 3PSM tend to occur when you have only a very small
proportion of effect sizes that are not statistically significant. In the
extreme, if there are zero non-significant effects, maximum likelihood
estimation fails to return an estimate of the probability of observing a
non-significant effect. Similar things happen in the slightly less extreme
situation where the data include only a few (say 2-3) non-significant
studies. In the paper linked above, we implemented a rather ad hoc fix for
this, where we adjusted the p-value threshold so that at least three effect
size estimates fell below the threshold and at least three fell above the
threshold. Details are on p. 3 of the supplementary materials here:
https://osf.io/qzdcg/
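To illustrate the idea (a rough sketch, not the exact code from the supplement), using metafor's selmodel() with a data frame dat_sub holding one sampled effect per study, and a one-sided step at .025 (the two-sided .05 convention; adjust to match your preregistered cut-off):

library(metafor)

# One-sided p-values, assuming selection favors positive effects
p_one <- pnorm(dat_sub$yi / sqrt(dat_sub$vi), lower.tail = FALSE)

# Shift the step so at least three effects fall on each side of it
alpha    <- 0.025
p_sorted <- sort(p_one)
k        <- length(p_sorted)
if (sum(p_one < alpha) < 3)  alpha <- mean(p_sorted[3:4])
if (sum(p_one >= alpha) < 3) alpha <- mean(p_sorted[(k - 3):(k - 2)])

# Random-effects model, then the 3PSM as a one-step selection model
fit <- rma(yi, vi, data = dat_sub)
sel <- selmodel(fit, type = "stepfun", steps = alpha)
summary(sel)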
Fourth, a further analysis that would complement the 3PSM and the Egger
regression test is the Mathur & VanderWeele (2020) publication bias
sensitivity analysis described in this paper:
Mathur, M. B., & VanderWeele, T. J. (2020). Sensitivity analysis for
publication bias in meta-analyses. Journal of the Royal Statistical
Society: Series C (Applied Statistics), 69(5), 1091.
An R package that implements the approach is here:
https://cran.r-project.org/web/packages/PublicationBias/index.html
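A sketch of what the calls might look like (argument names here follow recent versions of the package, which renamed the original corrected_meta() and svalue() functions, so check the version you install):

library(PublicationBias)

# Corrected pooled estimate if affirmative (significant, positive)
# results were, say, 4 times more likely to be published, with
# cluster-robust inference to handle the nesting
pubbias_meta(yi = dat$yi, vi = dat$vi, cluster = dat$study,
             selection_ratio = 4, model_type = "robust")

# S-value: how severe selection would have to be to shift the
# pooled estimate to the null (q = 0)
pubbias_svalue(yi = dat$yi, vi = dat$vi, cluster = dat$study,
               q = 0, model_type = "robust")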
Kind Regards,
James
On Tue, Dec 22, 2020 at 11:42 AM Michael Dewey <lists at dewey.myzen.co.uk>
wrote:
> Dear Lena
>
> Comment in-line
>
> On 22/12/2020 17:10, Lena Schäfer wrote:
> > Hello everyone,
> >
> > We are looking for advice on preregistering publication bias analysis
> for a meta-analysis. Our data set consists of 187 effect sizes nested in 53
> studies and we will account for the statistical dependency using robumeta.
> Forty of the 53 studies are published. To fulfill the assumption of
> statistical independence required for most publication bias analyses, we
> will randomly sample one effect size from each study, conduct the
> publication bias evaluation test on the set of 40 independent effect sizes,
> repeat the procedure 1000 times, and report the median as well as a
> histogram of the full distribution as an indicator of publication bias.
> >
> > We initially planned to use the following procedures to assess
> publication bias:
> >
> > - Regression model with publication status (published vs unpublished)
> >   as a moderator
> > - Vevea and Hedges’ (1995) three-parameter model with a one-sided
> >   cut-off parameter at p < .05 (assumes that authors selectively
> >   published significantly positive effects)
> > - Funnel-plot based methods:
> >   - visual inspection of funnel plots (Light & Pillemer, 2009)
> >   - Egger’s test of funnel plot asymmetry (Egger et al., 1997)
> >   - trim-and-fill procedure (Duval & Tweedie, 2000)
> >
> > Given the superiority of Vevea and Hedges’ three-parameter model (1995)
> over funnel-plot based approaches (Lau et al., 2006; McShane et al., 2016),
> especially when there is high heterogeneity, we planned to trust the
> conclusions of the former in the case of inconsistency between the
> conclusions of the different methods for detecting publication bias.
> >
> > However, if we ‘pre-commit’ to Vevea and Hedges’ three-parameter model
> (1995), does it even make sense to run the remaining analyses?
> >
>
> I think the underlying principle of pre-registration is that you commit
> to one of anything (outcome, analysis technique, ...) and then list any
> others as secondary outcomes or sensitivity analyses. However, if one
> technique dominates all the others, it is hard to see it needing a
> sensitivity analysis.
>
>
> > Finally, is it justifiable to estimate Vevea and Hedges’
> three-parameter model (1995) on a data set consisting of 40 studies? If
> not, what would be a good alternative (e.g., Vevea and Woods, 2005)?
>
> Sorry, that is a bit outside my area of expertise but others may have
> opinions.
>
> Michael
>
> >
> > We are basically looking for ‘state-of-the-art’ guidelines for
> pre-registering publication bias analyses for a relatively small sample
> of nested data. Please let me know if you need any further information!
> >
> > Thank you so much for your thoughts in advance!
> >
> > Best wishes,
> > Lena
> >
> >
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
>