# [R-meta] Implementation of the Inverse variance heterogeneity model

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Nov 30 11:09:15 CET 2017

As far as I am concerned, discussions around the pros and cons of various methods are perfectly fine, especially if they are directly linked to implementations in R.

So, we are considering two methods:

1) A random-effects model with the standard 1/(vi + tau^2) weights (where vi is the sampling variance of the ith study and tau^2 the (estimated) amount of variance/heterogeneity in the true outcomes)

versus

2) A random-effects model with 1/vi weights.

Under the assumptions of the RE model and in the absence of publication bias, both approaches provide an unbiased estimate of the average true outcome. Approach 1 is more efficient; in fact, using 1/(vi + tau^2) weights gives us the uniformly minimum variance unbiased estimator (UMVUE).

Sidenote: To be precise, that is only true if tau^2 were a known quantity and not estimated (and similarly, the sampling variances must be known quantities). So, really, we are only getting an approximation to the UMVUE. The larger k (number of studies) is, the more appropriate it is to treat tau^2 as a known quantity. The larger the within-study sample sizes are, the more appropriate it is to treat the sampling variances as known quantities (but what 'large' means here depends a lot on the outcome measure used; for measures based on a variance-stabilizing transformation, even rather small within-study sample sizes will do).
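To make the efficiency claim concrete, here is a small base-R sketch (with made-up yi, vi, and a tau^2 treated as known): the variance of a weighted average under the RE model is sum(wi^2 * (vi + tau^2)) / sum(wi)^2, and this is minimized by wi = 1/(vi + tau^2).

```r
# Hypothetical data: observed outcomes, sampling variances, known tau^2
yi   <- c(0.10, 0.30, -0.20, 0.50)
vi   <- c(0.05, 0.10,  0.02, 0.20)
tau2 <- 0.04

pooled <- function(yi, wi) sum(wi * yi) / sum(wi)

w_re <- 1 / (vi + tau2)  # standard RE weights
w_iv <- 1 / vi           # inverse sampling variance ("IVhet") weights

# Under the RE model, Var(yi) = vi + tau^2, so the variance of any
# weighted average is:
var_wavg <- function(wi, vi, tau2) sum(wi^2 * (vi + tau2)) / sum(wi)^2

var_wavg(w_re, vi, tau2)  # minimized by w_re; equals 1/sum(w_re)
var_wavg(w_iv, vi, tau2)  # never smaller than the RE-weight variance
```

Both `pooled(yi, w_re)` and `pooled(yi, w_iv)` are unbiased; only their variances differ.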

Things become complicated when there is publication bias, that is, when the probability of including a study in our meta-analysis is tied to the statistical significance of the finding/outcome. In that case, large studies (with very small sampling variances) will provide less biased estimates than small studies (with very large sampling variances). Now if tau^2 is very large, then tau^2 dominates the 1/(vi + tau^2) weights, so that all studies get almost the same weight, and hence also the very small studies that are so biased. As a result, the estimate of the average true outcome will also be badly biased. If, instead, we use 1/vi weights, then the very small studies are downweighted a lot and don't screw up our estimate as much.
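The "tau^2 dominates the weights" point is easy to see numerically. A base-R sketch with two hypothetical sampling variances (one large, one small study):

```r
# Relative 1/(vi + tau^2) weights for a large study (vi = 0.01)
# and a small study (vi = 1): as tau^2 grows, the weights equalize.
vi <- c(0.01, 1)

rel_weights <- function(tau2) {
  w <- 1 / (vi + tau2)
  w / sum(w)
}

rel_weights(0)   # tau^2 = 0: essentially all weight on the large study
rel_weights(10)  # tau^2 large: nearly equal weights for both studies
```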

That is in essence what Henmi and Copas (2010) have shown. They also derived a method to compute the SE/CI for approach 2, which is implemented in the hc() function in metafor. To go back to the earlier example:

```r
library(metafor)

dat <- get(data(dat.li2007))
dat <- dat[order(dat$study),]
rownames(dat) <- 1:nrow(dat)
dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i, data=dat, subset=-c(19,21,22))

### standard RE model
res <- rma(yi, vi, data=dat, method="DL")
predict(res, transf=exp, digits=2)

### Henmi & Copas (2010) method
hc(res, transf=exp, digits=2)

### RE model with 1/vi weights ("IVhet")
res <- rma(yi, vi, data=dat, method="DL", weights=1/vi)
predict(res, transf=exp, digits=2)
```

Interestingly, the H&C method gives a MUCH wider CI here. Usually though, the difference between the H&C method and the `rma(yi, vi, data=dat, method="DL", weights=1/vi)` approach ("IVhet") is rather small.
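As a reminder of what `method="DL"` refers to: rma() estimates tau^2 with the DerSimonian-Laird method, which can be sketched in a few lines of base R (hypothetical yi/vi; this is the plain estimator, without any of rma()'s extra machinery):

```r
# DerSimonian-Laird estimate of tau^2: compare the Q statistic
# (weighted residual sum of squares around the fixed-effects mean,
# with 1/vi weights) against its expected value k - 1 under homogeneity.
dl_tau2 <- function(yi, vi) {
  wi <- 1 / vi
  mu_fe <- sum(wi * yi) / sum(wi)   # fixed-effects pooled estimate
  Q <- sum(wi * (yi - mu_fe)^2)
  k <- length(yi)
  C <- sum(wi) - sum(wi^2) / sum(wi)
  max(0, (Q - (k - 1)) / C)        # truncate at 0
}

dl_tau2(c(-1, 0, 1), rep(0.1, 3))  # heterogeneous outcomes: tau^2 > 0
dl_tau2(rep(0.2, 3), rep(0.1, 3))  # identical outcomes: tau^2 = 0
```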

I gave a talk at the 2016 meeting of the Society for Research Synthesis Methodology about this topic. The slides are here in case you are interested:

http://www.wvbauer.com/lib/exe/fetch.php/talks:2016_viechtbauer_srsm_weights.pdf

In the simulation, I also compared the H&C method with the "IVhet" approach (not shown in the slides) and found that the H&C approach did just a tad better, but not by much. A disadvantage of the H&C approach is that it doesn't generalize to models with moderators (meta-regression).

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of dirk.richter at upd.unibe.ch
Sent: Wednesday, 29 November, 2017 22:29
To: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Implementation of the Inverse variance heterogeneity model

Dear Wolfgang, dear James,
many thanks to both of you for replying so quickly and for providing some valuable history lessons to an R-meta newbie.
My next question may be a bit off topic, as it is more generally about the pros and cons of these different approaches (am I allowed to put it here?). At first sight, the inverse sampling variance approach and those you have mentioned appeal to me, especially compared to the conventional RE model with its rather similar weights for larger and smaller samples. The newbie that I am would like some guidance on these issues, or at least an in-depth discussion paper. Does anybody have a recommendation?
Thanks,
Dirk

Von: James Pustejovsky [mailto:jepusto at gmail.com]
Gesendet: Mittwoch, 29. November 2017 21:02
An: Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
Cc: Richter, Dirk (UPD) <dirk.richter at upd.unibe.ch>; r-sig-meta-analysis at r-project.org
Betreff: Re: [R-meta] Implementation of the Inverse variance heterogeneity model

I was just typing up an email saying the same thing (and using the same example), but Wolfgang beat me to the punch! So count it as independently replicated. I would add two things:

1. An alternative to the IVhet method is to use the FE model with robust variance estimation (Sidik & Jonkman, 2006) to account for between-study heterogeneity when estimating standard errors. This can be done with the clubSandwich package (though you'll have to do the scale transformation as a post-processing step):

```r
### standard FE model
res <- rma(yi, vi, data=dat, method="FE")
library(clubSandwich)
coef_test(res, vcov = "CR2", cluster = dat$id)
```

In this example, the robust standard error is *substantially* smaller than the IVhet standard error. It also has very low degrees of freedom because of the very unequal weighting of the studies.
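The basic idea behind sandwich standard errors for a weighted mean can be sketched in base R. Note this is the uncorrected (CR0-type) version, not the small-sample-corrected CR2 estimator that `coef_test()` computes; the data are made up:

```r
# Robust (sandwich-type) SE for an inverse-variance weighted mean:
# the weights are used for the point estimate, but the variance is
# estimated from observed squared residuals rather than from vi.
robust_se <- function(yi, vi) {
  wi <- 1 / vi
  mu <- sum(wi * yi) / sum(wi)
  sqrt(sum(wi^2 * (yi - mu)^2)) / sum(wi)
}

robust_se(c(-1, 1), c(1, 1))          # residual spread drives the SE
robust_se(rep(0.5, 3), c(0.1, 0.2, 0.3))  # no spread, SE of 0
```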

2. In the conventional random effects model, the Knapp-Hartung method is often recommended for testing the average treatment effect:

```r
### standard RE model with Knapp-Hartung
res <- rma(yi, vi, data=dat, method="DL", test = "knha")
predict(res, transf=exp, digits=2)
```

I don't know if there is research into the relative performance of Knapp-Hartung with inverse-sampling variance weights (anybody know of work on this?), but on the face of it, it seems reasonable to generalize based on its performance under conventional RE models:

```r
### RE model with 1/vi weights ("IVhet")
res <- rma(yi, vi, data=dat, method="DL", weights=1/vi, test = "knha")
predict(res, transf=exp, digits=2)
```
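For reference, the Knapp-Hartung adjustment replaces the usual SE of the pooled estimate with one based on the weighted residual sum of squares and uses a t-distribution with k - 1 degrees of freedom. A base-R sketch with standard RE weights, treating tau^2 as known and using made-up data:

```r
# Knapp-Hartung standard error for the weighted pooled estimate.
kh_se <- function(yi, vi, tau2) {
  wi <- 1 / (vi + tau2)
  mu <- sum(wi * yi) / sum(wi)
  k  <- length(yi)
  s2 <- sum(wi * (yi - mu)^2) / (k - 1)  # weighted residual variance
  sqrt(s2 / sum(wi))
}

# A 95% CI then uses a t-quantile with k - 1 df:
# mu +/- qt(0.975, k - 1) * kh_se(yi, vi, tau2)
kh_se(c(-1, 0, 1), rep(0.1, 3), 0.05)
```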

James

On Wed, Nov 29, 2017 at 1:42 PM, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl<mailto:wolfgang.viechtbauer at maastrichtuniversity.nl>> wrote:
Dear Dirk,

What Doi et al. describe are RE models with different weights than the default ones.

"AMhet" uses unit weights; fitting such a model has been possible in metafor since its first release in 2009. "IVhet" uses inverse sampling variance weights; fitting such a model has been possible since version 1.9-3, released in 2014.
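In other words, all three estimators are just weighted averages with different weights. With made-up yi/vi and an assumed tau^2:

```r
yi   <- c(0.4, -0.1, 0.8)   # hypothetical outcomes
vi   <- c(0.02, 0.05, 0.30) # hypothetical sampling variances
tau2 <- 0.10                # assumed heterogeneity

wavg <- function(yi, wi) sum(wi * yi) / sum(wi)

wavg(yi, 1 / (vi + tau2))     # standard RE weights
wavg(yi, 1 / vi)              # "IVhet": inverse sampling variance
wavg(yi, rep(1, length(yi)))  # "AMhet": unit weights = arithmetic mean
```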

Using the example from Doi et al. (2015):

```r
##############################

library(metafor)

dat <- get(data(dat.li2007))
dat <- dat[order(dat$study),]
rownames(dat) <- 1:nrow(dat)
dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i, data=dat, subset=-c(19,21,22))

### standard RE model
res <- rma(yi, vi, data=dat, method="DL")
predict(res, transf=exp, digits=2)

### RE model with 1/vi weights ("IVhet")
res <- rma(yi, vi, data=dat, method="DL", weights=1/vi)
predict(res, transf=exp, digits=2)

### RE model with unit weights ("AMhet")
res <- rma(yi, vi, data=dat, method="DL", weights=1)
predict(res, transf=exp, digits=2)

##############################
```

The results are exactly those reported on page 135: "When the meta-analytic estimates were computed using the three methods, they were most extreme with the AMhet estimator (OR 0.44; 95% CI 0.29-0.66), less extreme with the RE estimator (OR 0.71; 95% CI 0.57-0.89) and most conservative with the IVhet estimator (OR 1.01; 95% CI 0.71-1.46)."

The idea to fit a RE model with inverse sampling variance weights was actually already described in:

Henmi, M., & Copas, J. B. (2010). Confidence intervals for random effects meta-analysis and robustness to publication bias. Statistics in Medicine, 29(29), 2969-2983.

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org<mailto:r-sig-meta-analysis-bounces at r-project.org>] On Behalf Of dirk.richter at upd.unibe.ch<mailto:dirk.richter at upd.unibe.ch>
Sent: Wednesday, 29 November, 2017 17:22
To: r-sig-meta-analysis at r-project.org<mailto:r-sig-meta-analysis at r-project.org>
Subject: [R-meta] Implementation of the Inverse variance heterogeneity model

Dear R meta-analysis group,

I was wondering whether there are any plans to implement the inverse variance heterogeneity model (by Doi et al., see reference below) in R meta-analysis packages, or whether this has already been done (although I couldn't find anything on the web). While the authors of this model provide MetaXL, a free software package that can run such an analysis, I would be happy to have it connected to or implemented in R so that I could run meta-regressions based on this approach. Currently, there is only a connection to Stata for meta-regressions.

Reference

SA Doi et al. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model. Contemp Clin Trials. 2015 Nov;45(Pt A):130-8. doi: 10.1016/j.cct.2015.05.009

Thanks,
Dirk Richter

UNIVERSITÄRE PSYCHIATRISCHE DIENSTE BERN (UPD) AG
DIREKTION PSYCHIATRISCHE REHABILITATION

Dirk Richter, Dr. phil. habil.
Leiter Forschung und Entwicklung
Murtenstrasse 46
CH-3008 Bern
Tel. +41 31 632 4707
Mobil + 41 76 717 5220
E-Mail: dirk.richter at upd.unibe.ch<mailto:dirk.richter at upd.unibe.ch>
https://www.upd.ch/forschung/psychiatrische-rehabilitation/

University of Bern Psychiatric Services
Center for Psychiatric Rehabilitation
Dirk Richter, Dr. phil., PhD