[R-meta] Publication bias/sensitivity analysis in multivariate meta-analysis

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Jun 15 14:00:03 CEST 2020


This reminds me a bit about the magnesium treatment meta-analysis where the ISIS-4 "mega-trial" ended up showing essentially a null effect while the collection of smaller studies beforehand showed a beneficial effect. The example was also used by Matthias Egger for illustrating the idea behind the regression test:

Egger, M., & Davey Smith, G. (1995). Misleading meta-analysis: Lessons from “an effective, safe, simple” intervention that wasn't. British Medical Journal, 310, 752–754.

Egger, M., Davey Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal, 315(7109), 629–634.
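
In case it is useful to others: the regression test is implemented in metafor as regtest(). A minimal sketch (the numbers below are made up for illustration, not the actual magnesium trials):

library(metafor)

# made-up data: yi = observed outcomes (e.g., log odds ratios),
# vi = corresponding sampling variances
dat <- data.frame(yi = c(-0.80, -0.55, -0.65, -0.45, -0.02),
                  vi = c(0.350, 0.200, 0.280, 0.150, 0.004))

res <- rma(yi, vi, data=dat)

# regress the outcomes on their standard errors; a significant slope
# indicates funnel plot asymmetry
regtest(res, predictor="sei")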

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces@r-project.org]
>On Behalf Of Michael Dewey
>Sent: Monday, 15 June, 2020 12:44
>To: Gerta Ruecker; Norman DAURELLE
>Cc: r-sig-meta-analysis@r-project.org; Huang Wu
>Subject: Re: [R-meta] Publication bias/sensitivity analysis in multivariate
>meta-analysis
>
>Just to add to Gerta's comprehensive reply.
>
>One IPD analysis in which I was involved had a number of small studies
>that were broadly positive and one large study that was effectively
>null. The investigators were convinced that they were very unlikely to
>have missed any other studies, and the most likely explanation for the
>small-study effect was that the small studies were conducted by
>enthusiasts for the new therapy, who often delivered it themselves,
>whereas the large study involved many therapists scattered over the
>country who were more likely to represent how the therapy would actually
>work if rolled out. I suspect similar things often happen with complex
>interventions.
>
>Michael
>
>On 15/06/2020 10:19, Gerta Ruecker wrote:
>> Dear Norman, dear all,
>>
>> To clarify the terms:
>>
>> Small-study effects: the phenomenon that small studies show
>> systematically different effects than large studies. The term was
>> coined by Sterne et al. (Sterne, J. A. C., Gavaghan, D., and Egger, M.
>> (2000). Publication and related bias in meta-analysis: Power of
>> statistical tests and prevalence in the literature. Journal of
>> Clinical Epidemiology, 53:1119–1129.) Small-study effects show up in a
>> funnel plot as asymmetry.
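>>
>> In R, such asymmetry can be inspected with the metafor package, for
>> example (a minimal sketch; the numbers are made up for illustration):
>>
>> library(metafor)
>>
>> # made-up data: yi = effect estimates, vi = sampling variances
>> dat <- data.frame(yi = c(-0.90, -0.70, -0.60, -0.40, -0.05),
>>                   vi = c(0.40, 0.30, 0.25, 0.15, 0.01))
>>
>> res <- rma(yi, vi, data=dat)
>> funnel(res)  # asymmetry suggests small-study effects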
>>
>> Reasons for small-study effects include: heterogeneity, e.g., small
>> studies may have selected patients (for example, patients in worse
>> health); publication bias (see below); mathematical artifacts for
>> binary data (Schwarzer, G., Antes, G., and Schumacher, M. (2002).
>> Inflation of type I error rate in two statistical tests for the
>> detection of publication bias in meta-analyses with binary outcomes.
>> Statistics in Medicine, 21:2465–2477); or simply chance.
>>
>> Publication bias is one possible reason for small-study effects: small
>> studies with small, null, or undesired effects are not published and
>> are therefore not found in the literature. The result is an effect
>> estimate that is biased towards large effects.
>>
>> Sensitivity analysis is one way to investigate small-study effects,
>> and there is an abundance of literature and methods for doing this.
>> Well-known approaches are selection models, e.g., Vevea, J. L. and
>> Hedges, L. V. (1995). A general linear model for estimating effect
>> size in the presence of publication bias. Psychometrika, 60:419–435,
>> or Copas, J. and Shi, J. Q. (2000). Meta-analysis, funnel plots and
>> sensitivity analysis. Biostatistics, 1:247–262.
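>>
>> For illustration, a Vevea-Hedges-type step selection model can be
>> fitted with selmodel() in recent versions of metafor (a minimal
>> sketch; the numbers are made up, and real applications need
>> considerably more studies):
>>
>> library(metafor)
>>
>> dat <- data.frame(yi = c(0.61, 0.44, 0.52, 0.39, 0.28, 0.47, 0.55, 0.33),
>>                   vi = c(0.08, 0.05, 0.07, 0.04, 0.02, 0.06, 0.07, 0.03))
>>
>> res <- rma(yi, vi, data=dat)
>>
>> # step selection function with one cutpoint at a one-sided p of .025
>> selmodel(res, type="stepfun", steps=0.025)
>>
>> The Copas model is implemented as copas() in the metasens package
>> (for meta objects from the meta package).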
>>
>> I attach a talk with more details.
>>
>> Best,
>>
>> Gerta
>>
>> On 15.06.2020 at 02:28, Norman DAURELLE wrote:
>>> Hi all, I read this thread, and the topic interests me, but I didn't
>>> quite understand your answer: when you say "Publication bias is a
>>> subset of small study effects where you know the aetiology of the
>>> small study effects. If you do not then it is safer to refer to small
>>> study effects.", I don't really understand what you mean. I thought
>>> publication bias meant that the studies included in a sample of
>>> studies didn't really account for the whole range of possible effect
>>> sizes (with their associated standard errors). Is that not what
>>> publication bias refers to? And if it is, how does it also correspond
>>> to the definition you gave? Thank you! Norman.

