[R-meta] Can traditional publication bias modelling approaches work properly with meta-analyses of proportions?

Gerta Ruecker ruecker at imbi.uni-freiburg.de
Mon Oct 9 19:59:30 CEST 2017


Dear Naike,

As this is not a question about R, you might want to join the mailing 
list of the Cochrane Statistics Methods Group, 
http://lists.cochrane.org/mailman/listinfo/smglist and post your 
question there, too. (By the way, I agree with what you say.)
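
Just to illustrate the mechanics (not as an endorsement): the classical 
funnel-plot-based tests you mention can be computed for a meta-analysis of 
proportions, for example with the metafor package on the logit scale. The 
sketch below uses made-up event counts and sample sizes; whether the 
resulting asymmetry tests mean the same thing as they do for comparative 
effect sizes is exactly the question you raise.

library(metafor)

## made-up data: event counts (xi) and sample sizes (ni)
dat <- data.frame(xi = c(12, 30, 8, 45, 20, 15, 5, 60),
                  ni = c(50, 120, 40, 200, 80, 90, 25, 300))

## logit-transformed proportions and their sampling variances
dat <- escalc(measure = "PLO", xi = xi, ni = ni, data = dat)

## random-effects model on the logit scale
res <- rma(yi, vi, data = dat)

funnel(res)      # funnel plot of the logit-transformed proportions
regtest(res)     # Egger-type regression test for funnel plot asymmetry
ranktest(res)    # Begg and Mazumdar rank correlation test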

Gerta Rücker


-- 

Dr. rer. nat. Gerta Rücker, Dipl.-Math.

Medical Faculty and Medical Center - University of Freiburg
Institute for Medical Biometry and Statistics

Stefan-Meier-Strasse 26, D-79104 Freiburg, Germany

Phone +49 (0)761 2036673
Fax   +49 (0)761 2036680

Mail: ruecker at imbi.uni-freiburg.de
Web:  www.imbi.uni-freiburg.de/biom/


On 09.10.2017 at 17:20, Naike Wang wrote:
> Hi all,
> I have a question about the use of publication bias modeling approaches in
> meta-analyses of proportions.
> The traditional approaches to assessing publication bias, such as the rank
> correlation test, Egger’s regression test, and weight-function approaches,
> all assume that the likelihood of a study being published depends on its
> sample size and statistical significance (Coburn and Vevea, 2015).
> Although empirical research has confirmed that statistical significance
> plays a dominant role in publication (Preston et al., 2004), it is not the
> whole story. Cooper et al. (1997) demonstrated that the decision whether
> to publish a study is influenced by a variety of criteria applied by
> journal editors beyond methodological quality and significance, including,
> but not limited to, the source of research funding and the social
> preferences prevailing at the time the research is conducted. The
> traditional methods therefore fail to capture the full complexity of the
> selection process.
> In practice, authors of meta-analyses of proportions have employed these
> methods in an attempt to detect publication bias. However, the studies
> included in meta-analyses of proportions are non-comparative, so there are
> no “negative” or “undesirable” results or study characteristics, such as
> significance levels, that could have biased publication (Maulik et al.,
> 2011). In my opinion, these traditional methods may therefore not fully
> explain an asymmetric distribution of effect sizes in the funnel plot.
> They may also fail to identify publication bias in meta-analyses of
> proportions, because publication bias in non-comparative studies may arise
> for reasons other than significance.
> I'm not sure if my reasoning is correct. What do you think? Can the
> traditional methods work properly with observational meta-analyses? If
> someone could point me to some papers regarding this topic, that'd be
> wonderful.
> Thank you!
>
> Naike
>
> References:
> Coburn, K. M., & Vevea, J. L. (2015). Publication bias as a function of
> study characteristics. *Psychological methods*, *20*(3), 310.
>
> Cooper, H., DeNeve, K., & Charlton, K. (1997). Finding the missing science:
> The fate of studies submitted for review by a human subjects
> committee. *Psychological
> Methods*, *2*(4), 447.
>
> Preston, C., Ashby, D., & Smyth, R. (2004). Adjusting for publication bias:
> modelling the selection process. *Journal of Evaluation in Clinical
> Practice*, *10*(2), 313-322.
>
> Maulik, P. K., Mascarenhas, M. N., Mathers, C. D., Dua, T., & Saxena, S.
> (2011). Prevalence of intellectual disability: a meta-analysis of
> population-based studies. *Research in developmental disabilities*, *32*(2),
> 419-436.
>
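For completeness, the weight-function (selection model) approach mentioned 
above can in principle also be fitted to the logit-transformed proportions, 
for example with the weightr package by Coburn and Vevea. The following is 
only a sketch, assuming weightr's weightfunct() interface and reusing the 
made-up data from the metafor sketch above, with an arbitrary one-step 
cutpoint at p = .025; whether a significance-based selection model is 
meaningful at all for non-comparative proportions is precisely the point of 
the question.

library(weightr)

## one-step selection model: publication probability is allowed to differ
## depending on whether p < .025; dat$yi and dat$vi are the logit
## proportions and variances from the sketch above, and the cutpoint is
## arbitrary, chosen for illustration only
weightfunct(effect = dat$yi, v = dat$vi, steps = c(0.025, 1))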

