[R-meta] Bivariate generalized linear mixed model with {metafor}

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer using maastrichtuniversity.nl
Wed Mar 9 23:46:55 CET 2022


Different parameterizations of the same model.

Also, the correlations seem like they just flipped signs, but they are really different things, and I suspect it is just a coincidence that they happen to be so close in absolute value.

With (group | study), you have a random intercept (for the control group logit risk) and a random slope for the group/treatment effect (for the log odds ratio).

With (control+treat-1|study), you have random effects for the control and treatment group logit risks. This is the same as (0 + group | study).

So really different things are being correlated here. But in the end, it's the same model, parameterized in different ways.
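To see this numerically, here is a minimal sketch of how the two sets of variance components relate (assuming 'res' and 'model6' from your code below have been fitted, that the intercept in 'res' corresponds to the control group logit risk as described above, and that the columns of the random-effects matrix of 'model6' are in the order control, treat). Transforming the covariance matrix of the control/treatment random effects should recover the intercept/slope covariance matrix, and hence the other correlation, up to optimizer tolerance.

G <- VarCorr(model6)$study    # 2x2 covariance matrix of the (control, treat) random effects
A <- rbind(c( 1, 0),          # intercept = control group logit risk
           c(-1, 1))          # slope     = treat - control = log odds ratio
A %*% G %*% t(A)              # implied covariance matrix for (intercept, slope)
cov2cor(A %*% G %*% t(A))     # implied correlation; compare with VarCorr(res)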

Best,
Wolfgang

>-----Original Message-----
>From: Arthur Albuquerque [mailto:arthurcsirio using gmail.com]
>Sent: Wednesday, 09 March, 2022 23:35
>To: r-sig-meta-analysis using r-project.org; Viechtbauer, Wolfgang (SP)
>Subject: RE: [R-meta] Bivariate generalized linear mixed model with {metafor}
>
>Very enlightening code, thank you!!
>
>I noticed that you used a different lme4 syntax for this model in your article with
>Jackson et al., with a “control” indicator and the intercept removed from the
>random-effects term.
>
>I compared the outputs of both syntaxes. The results are all the same, except for the
>random-effects correlation parameter (opposite signs). Do you know why?
>
>library(metafor)
>library(lme4)
>
>dat <- structure(list(study.name = c("Cutler1995", "Dahlof1991", "Dowson2002",
>"Ensink1991", "Geraud2000", "Goadsby1991", "Goadsby2000", "Havanka2000",
>"Kaniecki2006", "Mathew2003", "Myllyla1998", "Nappi1994", "Patten1991",
>"Pfaffenrath1998", "Sandrini2002", "Sargent1995", "Sheftell2005", "Tfelt-
>Hansen1995", "Tfelt-Hansen1998", "Visser1996"), n1 = c(66L, 275L, 194L, 131L,
>504L, 89L, 129L, 98L, 131L, 831L, 42L, 142L, 142L, 277L, 170L, 46L, 902L, 122L,
>388L, 72L), n2 = c(65L, 182L, 99L, 78L, 56L, 93L, 142L, 91L, 127L, 419L, 41L,
>81L, 101L, 91L, 84L, 47L, 892L, 126L, 160L, 85L), r1 = c(37L, 180L, 123L, 60L,
>304L, 45L, 63L, 59L, 64L, 471L, 33L, 73L, 95L, 175L, 85L, 26L, 649L, 63L, 239L,
>33L), r2 = c(17L, 48L, 42L, 14L, 24L, 9L, 30L, 28L, 47L, 105L, 12L, 25L, 22L,
>27L, 25L, 8L, 375L, 30L, 64L, 15L)), row.names = c(NA, 20L), class =
>"data.frame")
>
>dat <- to.long(measure="OR", ai=r1, n1i=n1, ci=r2, n2i=n2, data=dat)
>
>res <- glmer(cbind(out1,out2) ~ group + (group | study), data=dat,
>family=binomial)
>summary(res)
>
>VarCorr(res)
># Corr = -0.68
>
>dat_mod = dat
>dat_mod$treat = ifelse(dat$group==1, 1, 0)
>dat_mod$control = ifelse(dat$group==2, 1, 0)
>
>model6 = glmer(cbind(out1,out2)~factor(treat)+(control+treat-1|study),
> data=dat_mod, family=binomial(link="logit"))
>
>summary(model6)
>
>VarCorr(model6)
># Corr = 0.686
>
>Best,
>
>Arthur M. Albuquerque
>
>Medical student
>Universidade Federal do Rio de Janeiro, Brazil
>
>On Mar 9, 2022, 6:54 PM -0300, Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl>, wrote:
>
>Ok, so here is what I came up with:
>
>library(metafor)
>library(lme4)
>
>dat <- structure(list(study.name = c("Cutler1995", "Dahlof1991", "Dowson2002",
>"Ensink1991", "Geraud2000", "Goadsby1991", "Goadsby2000", "Havanka2000",
>"Kaniecki2006", "Mathew2003", "Myllyla1998", "Nappi1994", "Patten1991",
>"Pfaffenrath1998", "Sandrini2002", "Sargent1995", "Sheftell2005", "Tfelt-
>Hansen1995", "Tfelt-Hansen1998", "Visser1996"), n1 = c(66L, 275L, 194L, 131L,
>504L, 89L, 129L, 98L, 131L, 831L, 42L, 142L, 142L, 277L, 170L, 46L, 902L, 122L,
>388L, 72L), n2 = c(65L, 182L, 99L, 78L, 56L, 93L, 142L, 91L, 127L, 419L, 41L,
>81L, 101L, 91L, 84L, 47L, 892L, 126L, 160L, 85L), r1 = c(37L, 180L, 123L, 60L,
>304L, 45L, 63L, 59L, 64L, 471L, 33L, 73L, 95L, 175L, 85L, 26L, 649L, 63L, 239L,
>33L), r2 = c(17L, 48L, 42L, 14L, 24L, 9L, 30L, 28L, 47L, 105L, 12L, 25L, 22L,
>27L, 25L, 8L, 375L, 30L, 64L, 15L)), row.names = c(NA, 20L), class =
>"data.frame")
>
># double-check that we can reproduce OR = 3.50 (95% CI, 2.94 to 4.16) for
># the two-stage random-effects model
>
>res <- rma(measure="OR", ai=r1, n1i=n1, ci=r2, n2i=n2, data=dat, method="DL")
>predict(res, transf=exp, digits=2)
>
># the model being fitted is essentially this one
>
>res <- rma.glmm(measure="OR", ai=r1, n1i=n1, ci=r2, n2i=n2, data=dat,
>model="UM.RS", cor=TRUE)
>summary(res)
>predict(res, transf=exp, digits=2)
>
># we can also fit this model directly with glmer()
>
>dat <- to.long(measure="OR", ai=r1, n1i=n1, ci=r2, n2i=n2, data=dat)
>
>res <- glmer(cbind(out1,out2) ~ group + (group | study), data=dat,
>family=binomial)
>summary(res)
>round(exp(c(fixef(res)[2], confint(res, parm="group1", method="Wald"))), 2)
>
># note the exact same pooled OR and 95% CI
>
># however, the authors report the marginal OR, so for this we fit the model as
>
>res <- glmer(cbind(out1,out2) ~ 0 + group + (0 + group | study), data=dat,
>family=binomial)
>summary(res)
>
># and then we can estimate the marginal OR as described in B.2
>
>cval <- 16*sqrt(3) / (15*pi)
>p1hat <- 1 / (1 + exp(-fixef(res)[1] / sqrt(1 + cval^2 *
>VarCorr(res)[[1]][1,1])))
>p2hat <- 1 / (1 + exp(-fixef(res)[2] / sqrt(1 + cval^2 *
>VarCorr(res)[[1]][2,2])))
>round((p2hat / (1 - p2hat)) / (p1hat / (1 - p1hat)), 2)
>
># there is the 3.48 that the authors report; getting the CI for this marginal OR
># requires a bit more work using the delta method
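>
>As a rough illustration, here is a minimal delta-method sketch that treats the estimated
>variance components as fixed (so it ignores their uncertainty and the resulting CI is only
>approximate); it assumes 'res' is the '0 + group' model fitted just above:
>
>b    <- fixef(res)                   # group-specific logit risks
>V    <- as.matrix(vcov(res))         # covariance matrix of the fixed effects
>tau2 <- diag(VarCorr(res)[[1]])      # random-effects variances
>cval <- 16*sqrt(3) / (15*pi)
>s    <- sqrt(1 + cval^2 * tau2)
>lmor <- b[2]/s[2] - b[1]/s[1]        # log marginal OR
>grad <- c(-1/s[1], 1/s[2])           # gradient with respect to the fixed effects
>se   <- sqrt(t(grad) %*% V %*% grad)
>round(exp(lmor + c(0, -1.96, 1.96) * c(se)), 2)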
>
>So, to summarize: The model they describe *is* 'Model 6'. You can also fit this
>with rma.glmm() as shown above (with the 'cor=TRUE' option, which I hadn't
>implemented yet when we wrote that 2018 paper). I don't know why they focus on
>the marginal OR, but I didn't read the article fully.
>
>Best,
>Wolfgang
>
>
>-----Original Message-----
>From: Arthur Albuquerque [mailto:arthurcsirio using gmail.com]
>Sent: Wednesday, 09 March, 2022 20:37
>To: r-sig-meta-analysis using r-project.org; Viechtbauer, Wolfgang (SP)
>Subject: RE: [R-meta] Bivariate generalized linear mixed model with {metafor}
>
>Dear Wolfgang,
>
>Thank you in advance. I contacted Reference [1] authors, and they sent me this
>data:
>
>structure(list(study.name = c("Cutler1995", "Dahlof1991", "Dowson2002",
>"Ensink1991", "Geraud2000", "Goadsby1991", "Goadsby2000", "Havanka2000",
>"Kaniecki2006", "Mathew2003", "Myllyla1998", "Nappi1994", "Patten1991",
>"Pfaffenrath1998", "Sandrini2002", "Sargent1995", "Sheftell2005", "Tfelt-
>Hansen1995", "Tfelt-Hansen1998", "Visser1996"), n1 = c(66L, 275L, 194L, 131L,
>504L, 89L, 129L, 98L, 131L, 831L, 42L, 142L, 142L, 277L, 170L, 46L, 902L, 122L,
>388L, 72L), n2 = c(65L, 182L, 99L, 78L, 56L, 93L, 142L, 91L, 127L, 419L, 41L,
>81L, 101L, 91L, 84L, 47L, 892L, 126L, 160L, 85L), r1 = c(37L, 180L, 123L, 60L,
>304L, 45L, 63L, 59L, 64L, 471L, 33L, 73L, 95L, 175L, 85L, 26L, 649L, 63L, 239L,
>33L), r2 = c(17L, 48L, 42L, 14L, 24L, 9L, 30L, 28L, 47L, 105L, 12L, 25L, 22L,
>27L, 25L, 8L, 375L, 30L, 64L, 15L)), row.names = c(NA, 20L), class =
>"data.frame")
>
>Best,
>
>Arthur M. Albuquerque
>
>Medical student
>Universidade Federal do Rio de Janeiro, Brazil
>
>On Mar 8, 2022, 7:07 PM -0300, Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl>, wrote:
>
>Hi Arthur,
>
>I don't want to start re-reading all those articles right now, but who cites whom
>isn't the best indication of whether people are talking about the same model or
>not.
>
>In [1], the authors use data from a Cochrane review to illustrate the model. If
>you provide these data here using dput(), so that I can immediately reproduce the
>exact dataset, then I could see whether I can reproduce their results. It's
>time-consuming to extract data manually from PDFs, so I'll leave it up to you
>whether you want to do this.
>
>Best,
>Wolfgang
>
>-----Original Message-----
>From: Arthur Albuquerque [mailto:arthurcsirio using gmail.com]
>Sent: Tuesday, 08 March, 2022 2:07
>To: r-sig-meta-analysis using r-project.org; Viechtbauer, Wolfgang (SP)
>Subject: RE: [R-meta] Bivariate generalized linear mixed model with {metafor}
>
>Hi Wolfgang,
>
>It’s me again about this bivariate model. I am having a hard time figuring out
>whether I understood it correctly.
>
>To recap, I wanted to fit a bivariate meta-analysis model (hereafter, mod1)
>described in Reference [1] below. You replied suggesting it was the "Model 6: the
>Van Houwelingen bivariate” (mod2) in your article with Jackson et al (Reference
>[2]).
>
>However, I am now re-reading all these articles and I believe mod1 and mod2 are
>different. Reference [1] cites Thompson et al. (Reference [3]) and does not cite
>Van Houwelingen. You cited Van Houwelingen et al. (Reference [4]). To my
>knowledge, they do indeed seem to be different models.
>
>In fact, Van Houwelingen in Reference [5] directly cites Thompson suggesting
>these models are distinct:
>
>"The mix of many 􏱪fixed and a few random e􏱨ffects as proposed by Thompson et al.
>… are more in the spirit of the functional approach. These methods are meant to
>impose no conditions on the distribution of the true baseline risks… In a letter
>to the editor by Van Houwelingen and Senn following the article of Thompson et
>al. , Van Houwelingen and Senn argue that putting Bayesian priors on all nuisance
>parameters, as done by Thompson et al., does not help solving the inconsistency
>problem."
>
>Are they indeed different models?
>
>Please ignore this email if this question is out of the scope of your mailing
>list. Sorry in advance.
>
>Kind regards,
>
>Arthur M. Albuquerque
>
>Medical student
>Universidade Federal do Rio de Janeiro, Brazil
>
>References:
>
>[1] Xiao, Mengli, Yong Chen, Stephen R Cole, Richard F MacLehose, David B
>Richardson, and Haitao Chu. ‘Controversy and Debate: Questionable Utility of the
>Relative Risk in Clinical Research: Paper 2: Is the Odds Ratio “Portable” in
>Meta-Analysis? Time to Consider Bivariate Generalized Linear Mixed Model’.
>Journal of Clinical Epidemiology 142 (February 2022): 280–87.
>https://doi.org/10.1016/j.jclinepi.2021.08.004
>
>[2] Jackson, Dan, Martin Law, Theo Stijnen, Wolfgang Viechtbauer, and Ian R.
>White. ‘A Comparison of Seven Random-Effects Models for Meta-Analyses That
>Estimate the Summary Odds Ratio’. Statistics in Medicine 37, no. 7 (30 March
>2018): 1059–85. https://doi.org/10.1002/sim.7588
>
>[3] Thompson, Simon G., Teresa C. Smith, and Stephen J. Sharp. ‘Investigating
>Underlying Risk as a Source of Heterogeneity in Meta-Analysis’. Statistics in
>Medicine 16, no. 23 (15 December 1997): 2741–58.
>https://doi.org/10.1002/(SICI)1097-0258(19971215)16:23<2741::AID-SIM703>3.0.CO;2-0
>
>[4] Van Houwelingen, Hans C., Koos H. Zwinderman, and Theo Stijnen. ‘A Bivariate
>Approach to Meta-Analysis’. Statistics in Medicine 12, no. 24 (30 December 1993):
>2273–84. https://doi.org/10.1002/sim.4780122405
>
>[5] Van Houwelingen, Hans C., Lidia R. Arends, and Theo Stijnen. ‘Advanced
>Methods in Meta-Analysis: Multivariate Approach and Meta-Regression’. Statistics
>in Medicine 21, no. 4 (28 February 2002): 589–624.
>https://doi.org/10.1002/sim.1040

