[R-meta] Weighting studies combining inverse variance and quality score in multiple treatment studies

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Sat Jan 20 10:48:29 CET 2018


Hi Vivien,

Besides the issue of whether one should use quality weights at all, I just want to mention that it is somewhat misleading to look only at the diagonal of the weight matrix in models where the weight matrix is no longer diagonal -- which is the case here. Try:

weights(model1, type="matrix")

to see the entire weight matrix, not just the diagonal. Hence, it is a bit of an oversimplification to say that study A gets x% of the weight, since off-diagonal elements also have an influence.
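
For instance, a rough sketch (assuming 'model1' is the rma.mv() fit from
your message below; the row-sum summary is only a crude approximation of
each estimate's overall contribution):

W <- weights(model1, type="matrix")   # full weight matrix
round(W, 3)                           # note the non-zero off-diagonal elements
round(100 * rowSums(W) / sum(W), 2)   # crude per-estimate contribution in %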

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
>On Behalf Of Gerta Ruecker
>Sent: Friday, 19 January, 2018 18:18
>To: r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] Weighting studies combining inverse variance and
>quality score in multiple treatment studies
>
>Dear Vivien,
>
>My response is only to the second question (weighting using quality
>scores):
>
>This is strongly discouraged, at least by Cochrane. It comes down to the
>difference between the multidimensional concept of quality and the
>unidimensional concept of bias (which is what we actually want to avoid).
>Not every quality item leads to bias, and if it does, the direction of the
>bias may be unclear; biases may even cancel out. Moreover, downweighting a
>biased study still leaves bias in the pooled estimate, so you might as well
>exclude the study altogether.
>
>See the Cochrane Handbook (http://handbook-5-1.cochrane.org/), particularly:
>
>8.3  Tools for assessing quality and risk of bias
>8.3.1 Types of tools
>8.3.2 Reporting versus conduct
>8.3.3 Quality scales and Cochrane reviews
>8.3.4 Collecting information for assessments of risk of bias
>8.15.2  Assessing risk of bias from other sources
>
>References:
>
>Greenland S, O'Rourke K. On the bias produced by quality scores in
>meta-analysis, and a hierarchical view of proposed solutions.
>Biostatistics 2001; 2: 463-471.
>
>Jüni P, Altman DG, Egger M. Assessing the quality of controlled clinical
>trials. BMJ 2001; 323: 42-46.
>
>Jadad AR, Cook DJ, Jones A, Klassen TP, Tugwell P, Moher M, Moher D.
>Methodology and reports of systematic reviews and meta-analyses: a
>comparison of Cochrane reviews with articles published in paper-based
>journals. JAMA 1998; 280: 278-280.
>
>Emerson JD, Burdick E, Hoaglin DC, Mosteller F, Chalmers TC. An empirical
>study of the possible relation of treatment differences to quality scores
>in controlled randomized clinical trials. Controlled Clinical Trials 1990;
>11: 339-352.
>
>Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias.
>Dimensions of methodological quality associated with estimates of
>treatment effects in controlled trials. JAMA 1995; 273: 408-412.
>
>Best,
>
>Gerta
>
>On 19.01.2018 at 18:04, Vivien Bonnesoeur wrote:
>> Dear all,
>> I would need some advice on how to combine quality scores and inverse
>> variance when weighting studies.
>> I'm contrasting infiltration rates between tree plantations and grassland,
>> and also between tree plantations and native forest (effect size = log
>> ROM), to find out whether tree plantations on grassland can increase
>> infiltration and recover it to the level of native forest.
>>
>> Here is the raw data:
>> article;trial;Land-use_change;Plantation_N;Plantation_mean;Plantation_sd;Control_N;Control_mean;Control_sd;yi;vi;quality_score
>> Gaitan2016;1;Plantation-grassland;32;36;41;32;11;7;1.186;0.053;1
>> Gonzalez2015;2;Plantation-grassland;9;76.6;17.6;2;76.6;37.5;0.000;0.126;0.8
>> Hoyos2005;3;Plantation-grassland;3;101.3;66;23;2.5;1.6;3.702;0.159;0.5
>> Hoyos2005;4;Plantation-Native_forest;3;101.3;66;3;225;271;-0.798;0.625;0.5
>> Moreno2012;5;Plantation-grassland;10;8064;7092;10;5004;7092;0.477;0.278;0.3
>> Moreno2012;6;Plantation-Native_forest;10;8064;7092;10;34092;7092;-1.442;0.082;0.3
>> Sadeghian2001;7;Plantation-grassland;12;210;120;16;30;27;1.946;0.078;1
>> Sadeghian2001;8;Plantation-Native_forest;12;210;120;16;760;439;-1.286;0.048;1
>> Zimmerman2007;9;Plantation-grassland;30;514;137;30;3;4;5.144;0.062;0.8
>> Zimmerman2007;10;Plantation-Native_forest;30;514;137;30;135;51;1.337;0.007;0.6
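>>
>> For reference, yi and vi are the log ratio of means and its sampling
>> variance; assuming the data above is read into ma.infilt, they can be
>> reproduced with metafor's escalc():
>>
>> ma.infilt <- escalc(measure="ROM", m1i=Plantation_mean, sd1i=Plantation_sd,
>>                     n1i=Plantation_N, m2i=Control_mean, sd2i=Control_sd,
>>                     n2i=Control_N, data=ma.infilt)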
>>
>> If I just use inverse variance-covariance weighting (to account for the
>> dependency arising from the reuse of some plantations):
>>
>> model1 = rma.mv(yi, V, mods = ~ `Land-use_change` - 1, method="REML",
>>                 slab=article, random = ~ factor(trial) | article,
>>                 data=ma.infilt)
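>>
>> Here V is the variance-covariance matrix of the estimates used in the
>> call above. As an illustrative sketch only, such a matrix could for
>> instance be built by assuming a working correlation, say r = 0.5, between
>> estimates that share a plantation (i.e., come from the same article):
>>
>> r <- 0.5
>> same <- outer(as.character(ma.infilt$article),
>>               as.character(ma.infilt$article), "==")
>> V <- r * sqrt(outer(ma.infilt$vi, ma.infilt$vi)) * same  # covariances
>> diag(V) <- ma.infilt$vi                                  # sampling variances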
>>
>> I end up with a lot of weight given to the studies where a plantation is
>> reused. In fact, those weights are really different from the inverse-vi
>> weights. For example:
>> Gaitan2016:      weights(model1) = 2.98%  ;  weight from inverse vi = 7.8%
>> Hoyos2005.1:     weights(model1) = 9.01%  ;  weight from inverse vi = 2.6%
>> Hoyos2005.2:     weights(model1) = 7.67%  ;  weight from inverse vi = 0.66%
>> Zimmerman2007.1: weights(model1) = 13.6%  ;  weight from inverse vi = 6.7%
>> Zimmerman2007.2: weights(model1) = 13.9%  ;  weight from inverse vi = 58%
>>
>> Here I have a first question:
>> - Is there a way to reduce the weight of studies where the same plantation
>> is reused for contrasts with two different controls? It seems like an
>> artificial over-weighting to me.
>>
>> Besides, some studies with a low quality score have larger weights than
>> studies with a high quality score. To combine the quality score and the
>> inverse variance in the study weights, my attempt is to take the weights
>> from model1 and multiply them by the quality scores, in this way:
>>
>> model2 = rma.mv(yi, V, mods = ~ `Land-use_change` - 1,
>>                 W = (ma.infilt$quality_score * weights(model1)) /
>>                     sum(ma.infilt$quality_score * weights(model1)),
>>                 method="REML", slab=article,
>>                 random = ~ factor(trial) | article, data=ma.infilt)
>>
>> This gives more satisfactory weights, since the studies with a very low
>> quality score now contribute little to the grand mean.
>> I would like to know, however, whether this way of combining quality and
>> inverse-variance weighting is theoretically sound and would not be
>> rejected by reviewers as a "critical flaw".
>>
>> Best regards

