[R-meta] methods for assessing publication bias while accounting for dependency

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Feb 28 14:31:46 CET 2022


Dear Brendan,

The 'regression method' approach could also be regarded as a form of sensitivity analysis when focusing on the model intercept as an estimate of the 'adjusted' effect (as in the PET/PEESE methods). In fact, if I recall the findings from various simulation studies correctly, this tends to work better than the trim and fill method.
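
To illustrate (with hypothetical names: 'dat' is a dataset of dependent estimates with yi and vi columns, 'study' and 'esid' are the study and estimate identifiers), such a model could be fitted along these lines:

library(metafor)

# standard errors of the estimates
dat$sei <- sqrt(dat$vi)

# PET: regress the estimates on their standard errors; the intercept
# estimates the effect for a (hypothetical) study with SE = 0
pet <- rma.mv(yi, vi, mods = ~ sei, random = ~ 1 | study/esid, data = dat)

# PEESE: use the sampling variances as the predictor instead
peese <- rma.mv(yi, vi, mods = ~ vi, random = ~ 1 | study/esid, data = dat)

summary(pet)

The intercept of 'pet' (or 'peese') is then the 'adjusted' estimate; if the working random-effects structure is in doubt, one could add cluster-robust inference on top (e.g., via robust()).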

One can also aggregate the estimates to the study level (or to whatever level is needed so that the resulting aggregated values can be assumed to be independent) and then apply methods that assume independence to these aggregated data (including trim and fill).
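
For example (again with hypothetical names, assuming 'dat' is an escalc object and 0.6 is a rough guess for the within-study correlation of the estimates):

library(metafor)

# collapse the estimates within each study into one aggregated estimate,
# assuming a compound symmetric structure with correlation rho
agg <- aggregate(dat, cluster = study, rho = 0.6)

# standard random-effects model on the (now independent) study-level values
res <- rma(yi, vi, data = agg)

# trim and fill (or any other method that assumes independence)
trimfill(res)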

Another recent method is described in this talk by James Pustejovsky: https://www.jepusto.com/talk/stanford-qsu-2022-selective-reporting/

Some other relevant readings:

Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S. N., Onghena, P. & Van den Noortgate, W. (2021). Detecting selection bias in meta-analyses with multiple outcomes: A simulation study. The Journal of Experimental Education, 89(1), 125-144. https://doi.org/10.1080/00220973.2019.1582470 

Rodgers, M. A. & Pustejovsky, J. E. (2021). Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes. Psychological Methods, 26(2), 141-160. https://doi.org/10.1037/met0000300 

P.S.: Please use meaningful post titles to make the mailing list archives more useful.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>Behalf Of Brendan Hutchinson
>Sent: Friday, 25 February, 2022 14:15
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] (no subject)
>
>Dear mailing list,
>
>I have a couple of minor questions regarding methods for assessing publication
>bias while accounting for dependency.
>
>To my understanding, there is no means of running a publication bias analysis,
>such as trim and fill, with a multilevel meta-analytic model in R (or, more
>generally, a model in which dependency issues need to be accounted for). I am
>aware that one can use a regression method, such as regressing the estimates
>onto their standard errors, within a multilevel model (this is fairly
>straightforward using rma.mv(), for example). However, what about methods (such
>as trim and fill) for assessing the robustness of findings when publication
>bias is a concern, while also accounting for dependency?
>
>The best I have found is a recent package "PublicationBias" by Mathur and
>VanderWeele (https://doi.org/10.1111/rssc.12440).
>
>I am wondering if anyone has any recommendations for particular methods, R
>packages, or readings?
>
>Thanks so much!
>
>Brendan
