[R-meta] Weighting studies combining inverse variance and quality score in multiple treatment studies
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Sat Jan 20 10:48:29 CET 2018
Besides the issue of whether one should use quality weights at all, I just want to mention that it is somewhat misleading to look only at the diagonal of the weight matrix in models where the weight matrix is no longer diagonal -- which is the case here. Try:

weights(model1, type="matrix")

to see the entire weight matrix, not just the diagonal. Hence, it is a bit of an oversimplification to say that study A gets x% of the weight, since off-diagonal elements also have an influence.
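As a small, self-contained illustration of this point (not the poster's data; dat.konstantopoulos2011 is just an example dataset shipped with metafor, standing in for model1):

```r
# illustrative sketch using the metafor package; the dataset and model
# below are stand-ins for the poster's model1, not the original analysis
library(metafor)

dat <- dat.konstantopoulos2011  # example data bundled with metafor
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

round(weights(res), 2)           # diagonal weights (in %), as usually reported
W <- weights(res, type = "matrix")  # full weight matrix; note the nonzero
W[1:4, 1:4]                         # off-diagonal elements within a cluster
```

Because the marginal variance-covariance matrix of such a model is not diagonal, each estimate's contribution to the pooled effect depends on whole rows of W, not just its diagonal entry.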
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Gerta Ruecker
>Sent: Friday, 19 January, 2018 18:18
>To: r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] Weighting studies combining inverse variance and
>quality score in multiple treatment studies
>My response is only to the second question (weighting using quality scores).
>This is strongly discouraged at least by Cochrane. It is a matter of the
>difference between the multidimensional concept of quality and the
>unidimensional concept of bias (which we want to avoid). Not every item
>of quality leads to bias, and if it does, the direction of bias might be
>unclear. Biases may cancel out. Moreover, to downweight a biased study
>still leads to bias, thus you might as well throw the study out.
>See the Cochrane Handbook http://handbook-5-1.cochrane.org/ ,
>8.3 Tools for assessing quality and risk of bias
>8.3.1 Types of tools
>8.3.2 Reporting versus conduct
>8.3.3 Quality scales and Cochrane reviews
>8.3.4 Collecting information for assessments of risk of bias
>8.15.2 Assessing risk of bias from other sources
>Sander Greenland and Keith O'Rourke, "On the bias produced by quality
>scores in meta-analysis, and a hierarchical view of proposed solutions",
>Biostatistics, vol. 2, pp. 463-471, 2001.
>Peter Jüni, Douglas G. Altman, and Matthias Egger, "Assessing the
>quality of controlled clinical trials", Brit. Med. J., vol. 323, pp.
>A. R. Jadad, D. J. Cook, A. Jones, T. P. Klassen, P. Tugwell, M. Moher,
>and D. Moher, "Methodology and reports of systematic reviews and
>meta-analyses: A comparison of Cochrane reviews with articles published
>in paper-based journals.", J. Amer. Med. Assoc., vol. 280, pp. 278-80,
>Emerson JD, Burdick E, Hoaglin DC, Mosteller F, Chalmers TC. An
>empirical study of the possible relation of treatment differences to
>quality scores in controlled randomized clinical trials. Controlled
>Clinical Trials 1990; 11: 339-352.
>Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias.
>Dimensions of methodological quality associated with estimates of
>treatment effects in controlled trials. JAMA 1995; 273: 408-412.
>On 19.01.2018 at 18:04, Vivien Bonnesoeur wrote:
>> Dear all,
>> I would need some advice on how to combine quality score and
>> variance for weighting studies.
>> I'm contrasting the infiltration rate between tree plantation and
>> grassland, and also tree plantation and native forest (effect size = log ROM), to
>> see if tree plantation on grassland can increase the infiltration and
>> reach the level of infiltration of native forest.
>> here is the raw data :
>> If I just use the inverse variance-covariance weighting (to account for
>> the dependency between effect sizes due to the reuse of some plantations):
>> model1 = rma.mv
>> I end up with a lot of weight on the studies where the plantation is
>> reused. Actually, those weights are really different from the weights based on the
>> inverse vi. For example:
>> Gaitan2016 : weight(model1) = 2.98% ; weight from inverse vi = 7.8%
>> Hoyos2005.1 :weight(model1) = 9.01% ; weight from inverse vi = 2.6%
>> Hoyos2005.2 :weight(model1) = 7.67% ; weight from inverse vi = 0.66%
>> Zimmerman2007.1 :weight(model1) = 13.6% ; weight from inverse vi =
>> Zimmerman2007.2 :weight(model1) = 13.9% ; weight from inverse vi =
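The rma.mv() call is truncated in the archive; a hedged sketch of what such a model might look like with current metafor (all object names here -- dat, study, esid, and the rho value -- are assumptions for illustration, not from the original post; vcalc() was added to metafor after this thread):

```r
# hypothetical sketch; dat, its columns, and the assumed correlation are
# stand-ins -- the original rma.mv() call was not preserved in the archive
library(metafor)

# vcalc() builds a block-diagonal V matrix in which effect sizes that
# reuse the same plantation (same cluster) are allowed to covary
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)

model1 <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
round(weights(model1), 2)  # model-based weights (in %)
```

Explicitly modeling the covariance between estimates that share a plantation is what makes the model-based weights diverge from the simple inverse-vi weights listed above.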
>> Here I have a first question:
>> - Is there a way to reduce the weight of studies where the plantation is
>> reused for contrasting with 2 different controls? It seems like
>> artificial over-weighting to me.
>> Besides, some studies with a low quality score have stronger weights than
>> studies with a high quality score. To combine the quality score and the
>> inverse variance in study weighting, my try is to take the weights from
>> model1 and multiply them by the quality score in this way:
>> It gives more satisfactory weights since the studies with a very low quality
>> score now have a small contribution to the grand mean.
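The code showing the combination is missing from the archive; one hedged reconstruction of the described idea -- multiply the model-based weights by the quality scores and renormalize -- might be (quality is an assumed numeric vector, aligned with the estimates in model1):

```r
# hypothetical sketch of the described weighting scheme; 'quality' is an
# assumed per-study quality-score vector, not from the original post
w.model <- weights(model1)             # model-based weights (in %)
w.comb  <- w.model * quality           # scale each weight by its quality score
w.comb  <- 100 * w.comb / sum(w.comb)  # renormalize so the weights sum to 100
round(w.comb, 2)
```

Such user-defined weights could in principle be supplied via the W argument of rma.mv(), but note that, as the replies above stress, weights constructed this way no longer correspond to an inverse-variance (or any model-based) estimator, which is one reason quality weighting is discouraged.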
>> I would like to know, however, whether this way of combining the quality and
>> inverse variance weighting is sound theoretically and won't be rejected by a
>> reviewer as a "critical flaw".
>> Best regards