[R-meta] Does trim and fill method correct for data falsification or lower quality of small studies?

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue May 3 09:15:00 CEST 2022


Dear Ali,

Please see my responses below.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of towhidi
>Sent: Tuesday, 03 May, 2022 1:37
>To: r sig meta-analysis list
>Subject: [R-meta] Does trim and fill method correct for data falsification or
>lower quality of small studies?
>
>Dear all,
>
>The asymmetry in a funnel plot can be caused by factors other than
>publication bias, such as data falsification or poorer quality in
>smaller trials. 

... or unaccounted-for moderators (that are correlated with study size), or more generally heterogeneity.

>However, the Cochrane Handbook mentions that "the trim
>and fill method does not take into account reasons for funnel plot
>asymmetry other than publication bias".

I searched through the handbook (https://training.cochrane.org/handbook/current) and couldn't find this quote. Where did you find this?

>I do not understand why it cannot account for data falsification or poor
>quality of small trials, assuming that these characteristics are
>associated with study size. For data falsification, the true observed
>effect size (before the fraudulent change in the data) for these studies
>converges on the true underlying effect size. But the falsified data
>move these data points to the right side, and, using the trim and fill
>method, this bias is neutralized by imputing their counterparts on the
>other side. 

'Neutralized' sounds a bit too optimistic. If a study is imputed to mirror the fraudulent study (which isn't guaranteed; it depends on how the funnel plot looks overall), it is going to be placed at 'est - delta', where 'est' is the pooled estimate at the end of the trim and fill procedure and 'delta' is the distance between 'est' and the fraudulent study. If 'est' is larger than 0, then this would still lead to some bias, but the bias should indeed be reduced.
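To make this concrete, here is a minimal sketch in R with the metafor package (the data and the single inflated effect are purely made up) showing where trim and fill places the imputed counterpart of such an outlying study:

library(metafor)

set.seed(1234)

k  <- 20
vi <- runif(k, 0.01, 0.2)                  # sampling variances
yi <- rnorm(k, mean = 0.3, sd = sqrt(vi))  # effects scattered around a true value of 0.3
yi[which.max(vi)] <- 1.5                   # smallest study gets an inflated ('falsified') effect

res <- rma(yi, vi, method = "FE")  # fixed-effects model
taf <- trimfill(res)               # trim and fill; imputes mirror images around the pooled estimate
taf
funnel(taf)                        # imputed studies (if any) appear as open points on the other side

Whether any studies are actually imputed depends on how the funnel plot looks as a whole, but if the outlier is mirrored, the pooled estimate from trimfill() is pulled back toward what one would have obtained without the inflated study, although, as noted above, the correction need not be exact.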

>Of course, the confidence intervals will be biased, because we are
>imputing data points that do not exist (which narrows the CI) and
>because the bias arising from data falsification or low quality adds to
>the estimated sampling variance (which widens the CI). Also, it changes
>the weights, especially in the random-effects model.
>
>But, isn't the point estimate a corrected estimate, assuming that data
>falsification has caused the asymmetry?

I would say yes. A simulation study that has examined the properties of various methods not only when there is publication bias but also under the use of 'questionable research practices' is:

Carter, E. C., Schönbrodt, F. D., Gervais, W. M. & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115-144. https://doi.org/10.1177/2515245919847196 

>The same argument may apply to the bias that arises from low-quality
>studies. However, if this is correct, I think that acknowledging this
>and interpreting the CIs with even more caution is more logical than
>assuming that the asymmetry is caused solely by publication bias and
>that misconduct and low quality of small studies have nothing to do with
>it.
>
>Is this correct? Or am I missing something?
>
>Thank you.
>
>--
>Ali Zia-Tohidi MSc
>Clinical Psychology
>University of Tehran


