[R-meta] Preregistering publication bias analysis
Michael Dewey
Tue Dec 22 18:41:33 CET 2020
Dear Lena
Comment in-line
On 22/12/2020 17:10, Lena Schäfer wrote:
> Hello everyone,
>
> We are looking for advice on preregistering publication bias analyses for a meta-analysis. Our data set consists of 187 effect sizes nested in 53 studies, and we will account for the statistical dependency using robumeta. Forty of the 53 studies are published. To satisfy the assumption of statistical independence required by most publication bias analyses, we will randomly sample one effect size from each study, conduct the publication bias test on the resulting set of 40 independent effect sizes, repeat the procedure 1000 times, and report the median as well as a histogram of the full distribution as an indicator of publication bias.
>
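[A minimal sketch of this resampling scheme, assuming the effect sizes live in a data frame `dat` with columns yi (effect size), vi (sampling variance), and study (study ID), and using Egger's regression test via metafor as the example publication bias test; all object names are illustrative, not taken from the original analysis.]

    library(metafor)

    set.seed(1)
    n_iter <- 1000
    pvals  <- numeric(n_iter)

    for (i in seq_len(n_iter)) {
      # draw one effect size at random from each study
      rows <- tapply(seq_len(nrow(dat)), dat$study,
                     function(r) if (length(r) == 1) r else sample(r, 1))
      samp <- dat[unlist(rows), ]

      # random-effects model on the independent subset, then Egger's test
      res      <- rma(yi, vi, data = samp)
      pvals[i] <- regtest(res)$pval
    }

    median(pvals)   # report the median over the 1000 resamples
    hist(pvals)     # and the full distribution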
> We initially planned to use the following procedures to assess publication bias (sketched in code after the list):
>
> Regression model with publication status (published vs unpublished) as a moderator
> Vevea and Hedges’ (1995) three-parameter model with a one-sided cut-off parameter at p < .05 (assumes that authors selectively published significantly positive effects)
> Funnel-plot based methods
> visual inspection of funnel plots (Light & Pillemer, 2009)
> Egger’s test of funnel plot asymmetry (Egger et al., 1997)
> trim-and-fill procedure (Duval & Tweedie, 2000)
>
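[A sketch of these planned procedures, assuming metafor and robumeta, with `dat` as the full dependent data set (columns yi, vi, study, published) and `samp` as one resampled set of independent effect sizes from the loop above; names are illustrative, and the cut-off passed to selmodel() should match however the one-sided selection is operationalised.]

    library(metafor)
    library(robumeta)

    # (1) publication status as a moderator, with robust variance
    #     estimation on the full dependent data set
    mod_pub <- robu(yi ~ published, data = dat,
                    studynum = study, var.eff.size = vi, small = TRUE)
    print(mod_pub)

    # random-effects model on one resampled set of independent effects
    res <- rma(yi, vi, data = samp)

    # (2) Vevea and Hedges (1995) three-parameter selection model;
    #     steps = 0.025 selects on significantly positive effects
    #     (one-sided p < .025, i.e. two-sided p < .05)
    sel <- selmodel(res, type = "stepfun", steps = 0.025)
    summary(sel)

    # (3) funnel-plot based methods
    funnel(res)      # visual inspection
    regtest(res)     # Egger's test of funnel plot asymmetry
    trimfill(res)    # trim-and-fill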
> Given the superiority of Vevea and Hedges’ (1995) three-parameter model over funnel-plot based approaches (Lau et al., 2006; McShane et al., 2016), especially when heterogeneity is high, we planned to trust the conclusions of the former in the case of inconsistency between the different methods for detecting publication bias.
>
> However, if we ‘pre-commit’ to Vevea and Hedges’ three-parameter model (1995), does it even make sense to run the remaining analyses?
>
I think the underlying principle of pre-registration is that you commit
to one of everything (outcome, analysis technique, ...) and then list any
others as secondary outcomes or sensitivity analyses. However, if one
technique dominates all the others, it is hard to see why it would need
a sensitivity analysis.
> Finally, is it justifiable to estimate Vevea and Hedges’ three-parameter model (1995) on a data set consisting of 40 studies? If not, what would be a good alternative (e.g., Vevea and Woods, 2005)?
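[For reference, the Vevea and Woods (2005) alternative mentioned in the question can be sketched with metafor's selmodel() by fixing the selection weights a priori rather than estimating them; the steps and weights below are illustrative assumptions, not the values from that paper, and `samp` is again one resampled set of independent effect sizes.]

    library(metafor)

    res <- rma(yi, vi, data = samp)

    # a priori selection pattern over one-sided p-value intervals
    steps   <- c(0.025, 0.05, 0.50, 1)   # interval cut-points
    weights <- c(1, 0.8, 0.5, 0.3)       # assumed relative selection weights

    # selection model with the weights held fixed (Vevea & Woods, 2005 style)
    sel_fixed <- selmodel(res, type = "stepfun",
                          steps = steps, delta = weights)
    summary(sel_fixed)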
Sorry, that is a bit outside my area of expertise but others may have
opinions.
Michael
>
> We are basically looking for ‘state-of-the-art’ guidelines for pre-registering publication bias analyses for a relatively small sample size of nested data. Please let me know if you need any further information!
>
> Thank you so much for your thoughts in advance!
>
> Best wishes,
> Lena
--
Michael
http://www.dewey.myzen.co.uk/home.html