[R-meta] Implementation of the Inverse variance heterogeneity model

Suhail Doi
Thu Aug 23 10:06:09 CEST 2018


Dear Dirk,

  I came across this communication by chance while doing an internet 
search for something else, but nevertheless thought it prudent to 
respond, albeit after some time has passed.

  I disagree with the comments made by Wolfgang, and I believe the 
history given is also not quite accurate.

  First, the IVhet model is not a random-effects model with 1/vi 
weights. It is a fixed-effect model with a robust error variance. 
Unfortunately, statisticians find it hard to get away from the concept 
of normally distributed random effects; the IVhet model is instead based 
on the premise that there is an underlying parameter from which all 
study effects emerge with various forms of error (which, for brevity, I 
will not discuss here).

  Second, the IVhet model was first described as the quality effects 
(QE) model in 2008 (well before Henmi and Copas). Indeed, the IVhet 
model IS the quality effects model with one constraint: all quality is 
set equal, so no quality input is required. To check this, run the QE 
model in MetaXL and enter the same quality value (any acceptable value) 
for every study; you will get the IVhet result. The reason IVhet was 
created was that, in the fifth year after the QE release, we realized 
that biostatisticians were simply unable to envisage quality the way we 
do, and IVhet was a way to bring the model back without mentioning 
quality. This has worked very well: although we lose the bias 
adjustment, we still have a much better estimator than the RE estimator 
in terms of MSE and coverage, and it is much more widely used in 
research than the QE estimator.
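To make the model concrete, here is a sketch of my own (not code from the original exchange) of the IVhet computation, using the dat.li2007 example discussed further down in this thread: the point estimate is the fixed-effect (1/vi) weighted mean, and its variance is the fixed-effect variance inflated by the DL tau squared as an overdispersion correction.

```r
library(metafor)

dat <- get(data(dat.li2007))
dat <- dat[order(dat$study),]
rownames(dat) <- 1:nrow(dat)
dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i,
              data=dat, subset=-c(19,21,22))

wi   <- 1 / dat$vi                               # fixed-effect weights
b    <- sum(wi * dat$yi) / sum(wi)               # IVhet point estimate (log OR)
tau2 <- rma(yi, vi, data=dat, method="DL")$tau2  # DL tau^2
v_b  <- sum((wi / sum(wi))^2 * (dat$vi + tau2))  # overdispersion-corrected variance

## this should reproduce rma(yi, vi, data=dat, method="DL", weights=1/vi)
exp(b)                                           # pooled OR
exp(b + c(-1, 1) * qnorm(0.975) * sqrt(v_b))     # 95% CI
```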

  Third, the CI that Henmi and Copas propose is not really optimal, 
since it is wider than the IVhet/QE intervals and the latter are known 
to have at least nominal coverage.

  Fourth, given that this is a fixed-effect model, tau squared is used 
as an overdispersion correction, and thus ONLY the DL tau squared 
defines the IVhet and QE models. Many biostatisticians have tried to 
replace the DL tau squared with REML and other variants; the results are 
no longer IVhet or QE models, because the conceptualization of the IVhet 
model depends on tau squared serving as an overdispersion correction and 
thus it must be generated via the method of moments. While the variance 
formulation provided by Wolfgang in the slides works, it will fail as 
soon as a different tau squared (more "accurate" in biostatistics 
parlance) is inserted. The fact that this formulation has been put 
forward now is, in my view, an ex post facto justification from 
biostatistics for missing this easily conceptualized estimator in the 
first place, and despite the fact that it is far superior to RE 
estimators, nothing much has changed.
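For concreteness, the method-of-moments (DerSimonian-Laird) tau squared that the overdispersion correction relies on can be sketched as follows (my own illustration, again using the dat.li2007 example from this thread):

```r
library(metafor)

dat <- get(data(dat.li2007))
dat <- dat[order(dat$study),]
dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i,
              data=dat, subset=-c(19,21,22))

wi   <- 1 / dat$vi
b_fe <- sum(wi * dat$yi) / sum(wi)        # fixed-effect estimate
Q    <- sum(wi * (dat$yi - b_fe)^2)       # Cochran's Q
k    <- length(dat$yi)
tau2 <- max(0, (Q - (k - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))

## agrees with rma(yi, vi, data=dat, method="DL")$tau2
tau2
```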

   Finally, a recent paper by Rice et al. has added more confusion to 
this area by distinguishing fixed effect (singular) from fixed effects 
(plural) models. My view is that there are neither fixed effects 
(plural) nor random effects models; these are just attempts by 
biostatistics to fit models based on their worldview, the key element of 
which is the existence of normally distributed random effects. Such 
models have survived simulation testing because most researchers tend to 
simulate the way they will eventually analyse, thus creating a 
self-fulfilling prophecy.

  Apologies if my comments seem a bit harsh, but I was a clinician 
working in hospitals for 20 years and an ardent user of research 
syntheses. I took up clinical epidemiology when I realized that there 
was a serious problem with research synthesis that needed to be fixed 
from outside mainstream biostatistics, after discussions with eminent 
biostatisticians had failed to generate much change.

  Best

  Suhail


  Suhail A. R. Doi
  Professor of Clinical Epidemiology (Hon),
  Research School of Population Health
  ANU College of Health and Medicine,
  62 Mills Rd
  The Australian National University
  Acton ACT 2601
  E: Suhail.Doi at anu.edu.au

  CRICOS Provider # 00120C


  _______________________________________________________________
 >
 > As far as I am concerned, discussions around the pros and cons of 
various methods are perfectly fine, esp. if they are directly linked to 
implementations in R.
 >
 > So, we are considering two methods:
 >
 > 1) A random-effects model with the standard 1/(vi + tau^2) weights 
(where vi is the sampling variance of the ith study and tau^2 the 
(estimated) amount of variance/heterogeneity in the true outcomes)
 >
 > versus
 >
 > 2) A random-effects model with 1/vi weights.
 >
 > Under the assumptions of the RE model and in the absence of 
publication bias, both approaches provide an unbiased estimate of the 
average true outcome. Approach 1 is more efficient; in fact, using 1/(vi 
+ tau^2) weights gives us the uniformly minimum variance unbiased 
estimator (UMVUE).
 >
 > Sidenote: To be precise, that is only true if tau^2 would be a known 
quantity and not estimated (and similarly, the sampling variances must 
be known quantities). So, really, we are only getting an approximation 
to the UMVUE. The larger k (number of studies) is, the more appropriate 
it is to treat tau^2 as a known quantity. The larger the within-study 
sample sizes are, the more appropriate it is to treat the sampling 
variances as known quantities (but what 'large' means here depends a lot 
on the outcome measure used; for measures based on a 
variance-stabilizing transformation, even rather small within-study 
sample sizes will do).
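As an aside, the variance-stabilization point is easy to see numerically (a quick sketch of my own, not part of the original message): with the Freeman-Tukey double-arcsine transformation for proportions, the sampling variance returned by escalc() depends only on the sample size, not on the observed proportion.

```r
library(metafor)

## three very different proportions, same n: the vi column is identical
## across rows, because the transformation stabilizes the variance
escalc(measure="PFT", xi=c(1, 25, 49), ni=c(50, 50, 50))
```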
 >
 > Things become complicated when there is publication bias, that is, 
when the probability of including a study in our meta-analysis is tied 
to the statistical significance of the finding/outcome. In that case, 
large studies (with very small sampling variances) will provide less 
biased estimates than small studies (with very large sampling 
variances). Now if tau^2 is very large, then tau^2 dominates the 1/(vi + 
tau^2) weights, so that all studies get almost the same weight, and 
hence also the very small studies that are so biased. As a result, the 
estimate of the average true outcome will also be badly biased. If, 
instead, we use 1/vi weights, then the very small studies are 
downweighted a lot and don't screw up our estimate as much.
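A small numerical sketch (mine, not part of the original message) of how a large tau^2 makes the 1/(vi + tau^2) weights nearly equal, so that small (potentially selection-biased) studies count almost as much as large ones:

```r
vi <- c(0.01, 0.5)            # a large study and a small study

w <- function(tau2) {
  wi <- 1 / (vi + tau2)
  round(wi / sum(wi), 2)      # normalized weights
}

w(0)    # tau^2 = 0: the large study dominates
w(5)    # large tau^2: the weights are close to 0.5 / 0.5
```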
 >
 > That is in essence what Henmi and Copas (2010) have shown. They also 
derived a method to compute the SE/CI for approach 2, which is 
implemented in the hc() function in metafor. To go back to the earlier 
example:
 >
 > library(metafor)
 >
 > dat <- get(data(dat.li2007))
 > dat <- dat[order(dat$study),]
 > rownames(dat) <- 1:nrow(dat)
 > dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i, data=dat, 
subset=-c(19,21,22))
 >
 > ### standard RE model
 > res <- rma(yi, vi, data=dat, method="DL")
 > predict(res, transf=exp, digits=2)
 >
 > ### Henmi & Copas (2010) method
 > hc(res, transf=exp, digits=2)
 >
 > ### RE model with 1/vi weights ("IVhet")
 > res <- rma(yi, vi, data=dat, method="DL", weights=1/vi)
 > predict(res, transf=exp, digits=2)
 >
 > Interestingly, the H&C method gives a MUCH wider CI here. Usually 
though, the difference between the H&C method and the "rma(yi, vi, 
data=dat, method="DL", weights=1/vi)" approach ("IVhet") is rather small.
 >
 > I gave a talk at the 2016 meeting of the Society for Research 
Synthesis Methodology about this topic. The slides are here in case you 
are interested:
 >
 > 
http://www.wvbauer.com/lib/exe/fetch.php/talks:2016_viechtbauer_srsm_weights.pdf
 >
 > In the simulation, I also compared the H&C method with the "IVhet" 
approach (not shown in the slides) and found that the H&C approach did 
just a tad better, but not by much. A disadvantage of the H&C approach 
is that it doesn't generalize to models with moderators (meta-regression).
 >
 > Best,
 > Wolfgang
 >
 > -----Original Message-----
 > From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at 
r-project.org] On Behalf Of dirk.richter at upd.unibe.ch
 > Sent: Wednesday, 29 November, 2017 22:29
 > To: r-sig-meta-analysis at r-project.org
 > Subject: Re: [R-meta] Implementation of the Inverse variance 
heterogeneity model
 >
 > Dear Wolfgang, dear James,
 > many thanks to both of you for replying so quickly and for providing 
some valuable history lessons to an R-meta newbie.
 > My next question may be a bit off topic, as it concerns more 
generally the pros and cons of these different approaches (am I allowed 
to ask it here?). At first sight, the inverse sampling variance approach 
and those you have mentioned appeal to me, especially when compared to 
conventional RE and its use of rather similar weights for larger and 
smaller samples. As the newbie that I am, I would like some guidance on 
these issues, or at least an in-depth discussion paper. Does anybody 
have a recommendation?
 > Thanks,
 > Dirk
 >
 > Von: James Pustejovsky [mailto:jepusto at gmail.com]
 > Gesendet: Mittwoch, 29. November 2017 21:02
 > An: Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at 
maastrichtuniversity.nl>
 > Cc: Richter, Dirk (UPD) <dirk.richter at upd.unibe.ch>; 
r-sig-meta-analysis at r-project.org
 > Betreff: Re: [R-meta] Implementation of the Inverse variance 
heterogeneity model
 >
 > I was just typing up an email saying the same thing (and using the 
same example), but Wolfgang beat me to the punch! So count it as 
independently replicated. I would add two things:
 >
 > 1. An alternative to the IVhet method is to use the FE model with 
robust variance estimation (Sidik & Jonkman, 2006) to account for 
between-study heterogeneity when estimating standard errors. This can be 
done with the clubSandwich package (though you'll have to do the scale 
transformation as a post-processing step):
 >
 > ### standard FE model
 > res <- rma(yi, vi, data=dat, method="FE")
 > library(clubSandwich)
 > coef_test(res, vcov = "CR2", cluster = dat$id)
 >
 > In this example, the robust standard error is *substantially* smaller 
than the IVhet standard error. It also has very low degrees of freedom 
because of the very unequal weighting of the studies.
 >
 > 2. In the conventional random effects model, the Knapp-Hartung method 
is often recommended for testing the average treatment effect:
 >
 > ### standard RE model with Knapp-Hartung
 > res <- rma(yi, vi, data=dat, method="DL", test = "knha")
 > predict(res, transf=exp, digits=2)
 >
 > I don't know if there is research into the relative performance of 
Knapp-Hartung with inverse-sampling variance weights (anybody know of 
work on this?), but on the face of it, it seems reasonable to generalize 
based on its performance under conventional RE models:
 >
 > ### RE model with 1/vi weights ("IVhet")
 > res <- rma(yi, vi, data=dat, method="DL", weights=1/vi, test = "knha")
 > predict(res, transf=exp, digits=2)
 >
 > James
 >
 > On Wed, Nov 29, 2017 at 1:42 PM, Viechtbauer Wolfgang (SP) 
<wolfgang.viechtbauer at 
maastrichtuniversity.nl<mailto:wolfgang.viechtbauer at 
maastrichtuniversity.nl>> wrote:
 > Dear Dirk,
 >
 > What Doi et al. describe are RE models with different weights than 
the default ones.
 >
 > "AMhet" uses unit weights. The possibility to fit this model was 
implemented in metafor since its first release in 2009. "IVhet" uses 
inverse sampling variance weights. The possibility to fit this model was 
implemented in version 1.9-3 in 2014.
 >
 > Using the example from Doi et al. (2015):
 >
 > ##############################
 >
 > library(metafor)
 >
 > dat <- get(data(dat.li2007))
 > dat <- dat[order(dat$study),]
 > rownames(dat) <- 1:nrow(dat)
 > dat <- escalc(measure="OR", ai=ai, n1i=n1i, ci=ci, n2i=n2i, data=dat, 
subset=-c(19,21,22))
 >
 > ### standard RE model
 > res <- rma(yi, vi, data=dat, method="DL")
 > predict(res, transf=exp, digits=2)
 >
 > ### RE model with 1/vi weights ("IVhet")
 > res <- rma(yi, vi, data=dat, method="DL", weights=1/vi)
 > predict(res, transf=exp, digits=2)
 >
 > ### RE model with unit weights ("AMhet")
 > res <- rma(yi, vi, data=dat, method="DL", weights=1)
 > predict(res, transf=exp, digits=2)
 >
 > ##############################
 >
 > The results are exactly those reported on page 135: "When the 
meta-analytic estimates were computed using the three methods, they were 
most extreme with the AMhet estimator (OR 0.44; 95% CI 0.29-0.66), less 
extreme with the RE estimator (OR 0.71; 95% CI 0.57-0.89) and most 
conservative with the IVhet estimator (OR 1.01; 95% CI 0.71-1.46)."
 >
 > The idea to fit a RE model with inverse sampling variance weights was 
actually already described in:
 >
 > Henmi, M., & Copas, J. B. (2010). Confidence intervals for random 
effects meta-analysis and robustness to publication bias. Statistics in 
Medicine, 29(29), 2969-2983.
 >
 > Best,
 > Wolfgang
 >
 > -----Original Message-----
 > From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at 
r-project.org<mailto:r-sig-meta-analysis-bounces at r-project.org>] On 
Behalf Of dirk.richter at upd.unibe.ch<mailto:dirk.richter at upd.unibe.ch>
 > Sent: Wednesday, 29 November, 2017 17:22
 > To: r-sig-meta-analysis at r-project.org<mailto:r-sig-meta-analysis 
at r-project.org>
 > Subject: [R-meta] Implementation of the Inverse variance 
heterogeneity model
 >
 > Dear R meta-analysis group,
 >
 > I was wondering whether there are any plans to implement the Inverse 
variance heterogeneity model (by Doi et al., see reference below) into R 
MA packages or whether this has been done recently (although I couldn't 
find anything on the Web). While the authors of this model have provided 
free software (MetaXL) that allows one to run such an analysis, I would 
be happy to have it connected to or implemented in R, so as to be able 
to run meta-regressions based on this approach. Currently, there is only 
a connection to Stata for meta-regressions.
 >
 > Reference
 >
 > SA Doi et al. Advances in the meta-analysis of heterogeneous clinical 
trials I: The inverse variance heterogeneity model. Contemp Clin Trials. 
2015 Nov;45(Pt A):130-8. doi: 10.1016/j.cct.2015.05.009
 >
 > Thanks,
 > Dirk Richter
 >
 > UNIVERSITÄRE PSYCHIATRISCHE DIENSTE BERN (UPD) AG
 > DIREKTION PSYCHIATRISCHE REHABILITATION
 >
 > Dirk Richter, Dr. phil. habil.
 > Leiter Forschung und Entwicklung
 > Murtenstrasse 46
 > CH-3008 Bern
 > Tel. +41 31 632 4707
 > Mobil + 41 76 717 5220
 > E-Mail: dirk.richter at upd.unibe.ch<mailto:dirk.richter at upd.unibe.ch>
 > https://www.upd.ch/forschung/psychiatrische-rehabilitation/
 >
 > University of Bern Psychiatric Services
 > Center for Psychiatric Rehabilitation
 > Dirk Richter, Dr. phil., PhD
 > Head of Research and Development
 > Murtenstrasse 46
 > CH-3008 Bern
 > Switzerland
 > Phone +41 31 632 4707
 > Mobile Phone +41 76 717 5220


