[R-meta] Borrowing of strength in bivariate meta-analysis

Aulbach, Matthias B matthias.aulbach at helsinki.fi
Mon Nov 5 09:46:34 CET 2018


Dear James,

thank you very much for your input and suggestions; they are much appreciated and have helped my thinking on this. In a way, it is nice to see that this case is not so clear-cut even to an expert like you ;)

If anyone else still has interpretations and/or suggestions I will be happy to hear about them.

Best,
Matthias

From: James Pustejovsky <jepusto at gmail.com>
Sent: Friday, 2 November 2018 4:16
To: Aulbach, Matthias B <matthias.aulbach at helsinki.fi>
Cc: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>; r-sig-meta-analysis at r-project.org
Subject: Re: Borrowing of strength in bivariate meta-analysis

Matthias,

This is a very interesting situation, to find that the results of the bivariate meta-analysis differ to such an extent from the separate univariate estimates. It's also a bit of a dilemma: on the one hand, the bivariate results might be driven by an unstable estimate of the correlation between effects for behavioral and evaluation outcomes. On the other hand, there may be some sort of selective outcome reporting at work here, in which case the bivariate model could be a useful way to reduce bias.

I would suggest a few diagnostics that could help clarify what is going on (a rough code sketch follows the list):

1. With a univariate model, examine differences in the average ES for behavioral outcomes between studies that do versus do not also report an evaluation outcome.
2. Similarly, examine differences in the average ES for evaluation outcomes between studies that do versus do not report a behavioral outcome. (Of course, only do this if there are studies that report evaluation outcomes but not behavioral outcomes.)
3. For studies that report both outcomes, create a scatter plot of behavioral versus evaluation outcomes. If you want to get fancy, let the size of the points in the scatter plot correspond to the size of the study (or some measure of the precision of the effects).
4. Examine contour-enhanced funnel plots for each outcome, considering especially whether there may be under-reporting of non-significant evaluation outcomes.
5. Consider context: is one of these outcomes typically treated as primary, while the other is a secondary outcome that might be reported less consistently?
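
To make diagnostics 1-4 concrete, here is a rough sketch in metafor and base R. It assumes a hypothetical data frame 'dat' with one row per effect size and columns yi (effect size), vi (sampling variance), ID (study identifier), and outcome (coded "behavior" or "evaluation"); adapt the names to your data.

library(metafor)

### flag studies that report both outcomes
dat$both <- with(dat, ID %in% ID[outcome == "behavior"] & ID %in% ID[outcome == "evaluation"])

### 1./2. does the average ES differ between studies with and without the other outcome?
rma(yi, vi, mods = ~ both, data = dat, subset = outcome == "behavior")
rma(yi, vi, mods = ~ both, data = dat, subset = outcome == "evaluation")

### 3. scatter plot of paired effects, point size reflecting precision
wide <- reshape(dat[dat$both, c("ID", "outcome", "yi", "vi")],
                direction = "wide", idvar = "ID", timevar = "outcome")
plot(wide$yi.behavior, wide$yi.evaluation,
     cex = 0.5 / sqrt(wide$vi.behavior + wide$vi.evaluation), ### scaling constant is arbitrary
     xlab = "g (behavior)", ylab = "g (evaluation)")

### 4. contour-enhanced funnel plot (shown for the evaluation outcome; repeat for behavior)
res_eval <- rma(yi, vi, data = dat, subset = outcome == "evaluation")
funnel(res_eval, level = c(90, 95, 99), shade = c("white", "gray55", "gray75"), refline = 0)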

I would be eager to hear others' suggestions and interpretations too.

Cheers,
James

On Thu, Nov 1, 2018 at 9:14 AM Aulbach, Matthias B <matthias.aulbach at helsinki.fi> wrote:
Dear mailing list,

I have a question regarding the phenomenon "borrowing of strength" in bivariate meta-analysis (as described, e.g., by Riley, 2009).
I first performed two univariate meta-analyses with 1. a behavioral outcome (k = 47, g = .17) and 2. a measure of evaluation (k = 24, g = .18). To compare the results, I then ran a bivariate meta-analysis (guesstimating the within-study correlation coefficient) and got g = .17 for the behavioral outcome and g = .37 for the evaluation outcome (the values differed only slightly across different guesstimated within-study correlation coefficients). See the e-mails below for more information on this meta-analysis.

Now my question is: how trustworthy is the larger estimated effect size for the second outcome (g = .37)? Is this what I should base my interpretation on, rather than the results from the univariate analyses?

Any piece of advice is much appreciated!

Best,
Matthias



From: James Pustejovsky <jepusto at gmail.com>
Sent: Wednesday, 16 May 2018 21:21
To: Aulbach, Matthias B <matthias.aulbach at helsinki.fi>
Cc: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>; r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Between study correlation between two different outcomes

The correlation between random effects is a notoriously difficult thing to estimate well. Here are two citations that discuss the point:

* Riley, R. D., Abrams, K. R., Sutton, A. J., Lambert, P. C., & Thompson, J. R. (2007). Bivariate random-effects meta-analysis and the estimation of between-study correlation. BMC Medical Research Methodology, 7(3), 1–15. https://doi.org/10.1186/1471-2288-7-3

* Chen, Y., Hong, C., & Riley, R. D. (2014). An alternative pseudolikelihood method for multivariate random-effects meta-analysis. Statistics in Medicine. https://doi.org/10.1002/sim.6350

Two things you could do to get a better sense of how well rho can be estimated:

1. Plot the profile log-likelihood for rho (example code below). The REML estimate of rho is based on maximizing the restricted log likelihood. If the profile likelihood is quite flat, then there's just not much information available in the data to estimate this correlation.

library(metafor)

### Berkey et al. (1998) example data
dat <- dat.berkey1998

### construct block-diagonal V, assuming a within-study correlation of 0.7
cor <- 0.7
V <- bldiag(lapply(split(dat[,c("v1i", "v2i")], dat$trial),
            function(vi) {vi[1,2] <- vi[2,1] <- sqrt(vi[1,1] * vi[2,2]) * cor; as.matrix(vi)}))

### multiple outcomes random-effects model
res <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)

### profile the restricted log-likelihood over rho
profile(res, rho = 1)

2. Calculate the number of studies in your meta-analysis that include estimates of both outcomes. If it's small, then rho will not be well estimated.
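
For instance, with the data structure described below (a data frame pd with study identifier ID and outcome indicator type_outcome), something along these lines would do:

### number of studies contributing estimates of both outcomes
sum(tapply(pd$type_outcome, pd$ID, function(x) length(unique(x)) == 2))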

There are a few potential remedies:
- Get more data that includes effect size estimates for both outcomes.
- Sophia Rabe-Hesketh and some of her colleagues have proposed using penalized log-likelihood to estimate variance components in multilevel models, using priors that keep the random-effects variance structure from degenerating. The blme R package fits those models easily, although I'm not sure whether it can handle meta-analysis models.
- If you can develop a reasonable prior for rho you could go full Bayes. The brms package is one good tool for fitting such models.
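
To sketch the full-Bayes option (an illustration only, reusing pd, type_outcome, and ID from below, and assuming a hypothetical column sei holding the standard errors, i.e., the square roots of the sampling variances; note that the se() term does not account for within-study covariance between the two outcomes):

library(brms)

### treat the outcome indicator as a factor so each outcome gets its own mean
pd$outc <- factor(pd$type_outcome)

### bivariate meta-analysis; lkj(2) mildly shrinks the correlation away from +/-1
fit <- brm(yi | se(sei) ~ 0 + outc + (0 + outc | ID),
           data = pd,
           prior = prior(lkj(2), class = cor),
           chains = 4, cores = 4)
summary(fit)  ### rho appears among the group-level (~ID) correlations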


On Wed, May 16, 2018 at 4:29 AM, Aulbach, Matthias B <matthias.aulbach at helsinki.fi> wrote:
Hello once again,

thanks to Wolfgang's most valuable replies, I have run the analyses as suggested. The problem now is that the estimates for rho don't seem to make much sense (point estimates of 1 and/or huge confidence intervals). I found this post from James E. Pustejovsky about the issue via the archives of this mailing list [http://jepusto.github.io/imputing-covariance-matrices-for-multi-variate-meta-analysis], where I read this:

"The metafor fit is also a bit goofy because the correlation between the random effects for math and verbal scores is very close to -1, although evidently it is not uncommon to obtain such degenerate estimates of the random effects structure."

Why is the estimation of rho "goofy" here, and could someone point to other examples in the literature to underline that this "is not uncommon"? Does it imply that the method doesn't work properly here, or did I perhaps do something wrong?

To clarify: What I want to know is to what degree studies that find larger effects in outcome 1 also find larger effects in outcome 2. In my initial solution to this problem, I had conducted univariate analyses and then ran a meta-regression with outcome 1 as the dependent variable and outcome 2 as a moderator. This was criticized by reviewers, which is why I have now conducted the multivariate analysis with both outcomes, trying to determine rho.

Thanks a lot in advance (once more)!

Best,
Matthias


-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> On Behalf Of Aulbach, Matthias B
Sent: Thursday, 10 May 2018 9:16
To: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Between study correlation between two different outcomes

Dear Wolfgang,

thank you again for your really detailed answer. This is helping me a lot!
I will definitely check the mailing list archives.

All the best,
Matthias

-----Original Message-----
From: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
Sent: Wednesday, 9 May 2018 11:46
To: Aulbach, Matthias B <matthias.aulbach at helsinki.fi>
Cc: r-sig-meta-analysis at r-project.org
Subject: RE: Between study correlation between two different outcomes

Dear Matthias,

Please keep the list in CC when replying.

V should have the sampling *variances* along the diagonal. And yes, the off-diagonal values will be the covariances. Since you have two values per study, V will therefore be block-diagonal with 2x2 blocks along the diagonal. So your case is basically the same as this one:

http://www.metafor-project.org/doku.php/analyses:berkey1998

The issue of missing information about the covariances has been discussed on this mailing list quite extensively, so it would be good to browse through the archives.

One solution you will find mentioned there is ignoring the covariances (i.e., assuming that they are 0) and then using cluster-robust inference methods. This approach should be ok when interest is only in the fixed effects of the model. In your case, this approach is *NOT* appropriate, since you are specifically interested in the correlation between the random effects. If you assume that the covariances between the sampling errors are 0, then this will (usually) drive up the covariance between the random effects, which is going to lead to an overestimate of the correlation between the random effects. The cluster-robust approach fixes up the standard errors of the fixed effects, but won't do anything to correct for the bias in the estimated correlation between the random effects.
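
For reference, a sketch of what that approach looks like with metafor's robust() function, using the Berkey et al. (1998) data from the illustration below; as noted, this only fixes the inference for the fixed effects, not the estimate of rho:

library(metafor)
dat <- dat.berkey1998

### V with the covariances (incorrectly) set to 0
V0 <- bldiag(lapply(split(dat[,c("v1i", "v2i")], dat$trial),
             function(vi) {vi[1,2] <- vi[2,1] <- 0; as.matrix(vi)}))

res0 <- rma.mv(yi, V0, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)

### cluster-robust inference for the fixed effects
robust(res0, cluster = dat$trial)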

So, if you do not have any information to compute the covariances, you will have to make an educated guess and then do a sensitivity analysis.

Using the Berkey et al. (1998) data as an example, here is an illustration of what this might look like:

library(metafor)

### load data
dat <- dat.berkey1998

cors <- seq(0, .99, length=100)
rhos <- rep(NA, length(cors))

for (j in 1:length(cors)) {

   print(j)

   ### construct V matrix with guesstimated correlation
   V <- bldiag(lapply(split(dat[,c("v1i", "v2i")], dat$trial), function(vi) {vi[1,2] <- vi[2,1] <- sqrt(vi[1,1] * vi[2,2]) * cors[j]; as.matrix(vi)}))

   ### multiple outcomes random-effects model
   res <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | trial, struct="UN", data=dat)
   rhos[j] <- res$rho

}

plot(cors, rhos, type="o", pch=19)

So what I am doing here is constructing the V matrix based on the sampling variances and different assumed values for the correlation between the sampling errors (cors). Then I fit the model and save the estimated correlation between the random effects (rhos). As you can see in the plot, as the assumed value of cor increases, the estimate of rho goes down: assuming a correlation of 0 leads to an estimate of rho equal to 0.78, while assuming a correlation of 0.9 leads to an estimate of rho equal to 0.36. That's quite different.

I don't know to what extent rho will depend on the assumed correlation for your data. Also, I used a very wide range for 'cors' just for illustration purposes. In practice, you should be able to narrow down the range a bit more.

Best,
Wolfgang

-----Original Message-----
From: Aulbach, Matthias B [mailto:matthias.aulbach at helsinki.fi]
Sent: Wednesday, 09 May, 2018 10:07
To: Viechtbauer, Wolfgang (SP)
Subject: RE: Between study correlation between two different outcomes

Dear Wolfgang,

thank you very much for your fast and very helpful answer!

About the data structure: basically everything you assumed is correct. I have two rows for each study ("ID"): the first row for outcome 1, the second for outcome 2. "type_outcome" is a binary variable indicating which kind of outcome is entered in which row.

And yes, the measurements are from the same subjects. The problem is that I don't have data on the within-study correlations (or covariances), so I will have to make a guess about how large they are (or try out different values?). Is that right?
So, as I understand it, I need a 32x32 matrix with the diagonal being the standard errors from the studies and the off-diagonal values being my educated guess about the within-study covariance between the outcomes. Is that right?

Thank you once more for the help!

Best,
Matthias

-----Original Message-----
From: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
Sent: Thursday, 3 May 2018 20:45
To: Aulbach, Matthias B <matthias.aulbach at helsinki.fi>; r-sig-meta-analysis at r-project.org
Subject: RE: Between study correlation between two different outcomes

Dear Matthias,

Can you explain the data structure a bit more? I assume you have two rows for each level of 'ID', the first row for outcome 1 and the second row for outcome 2, and that 'type_outcome' is a dummy variable indicating the outcome. Is that correct?

Then the output should include the correlation between the underlying true effects (rho). You can get a CI for this with the confint() function. To test its significance, you can conduct a likelihood ratio test. This should do it:

rp <- rma.mv(yi, vi, data=pd, mods = ~ factor(type_outcome) - 1, random = ~ type_outcome | ID)
rp0 <- rma.mv(yi, vi, data=pd, mods = ~ factor(type_outcome) - 1, random = ~ type_outcome | ID, rho=0)
anova(rp, rp0)
confint(rp, rho=1)

Two notes:

The model you are fitting uses struct="CS" by default. This assumes that the amount of heterogeneity is the same for the two outcomes, which may not be appropriate. So you might want to use struct="UN".
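
That is, adding struct="UN" to the call above:

rp <- rma.mv(yi, vi, data=pd, mods = ~ factor(type_outcome) - 1, random = ~ type_outcome | ID, struct="UN")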

Also, I assume the two outcomes are measured in the same subjects. In that case, the sampling errors of the two outcomes are correlated. So the V matrix (the second argument of the rma.mv() function) is not diagonal, but should also include the covariances. If you do not account for this, the correlation between the underlying true effects is very likely an overestimate.

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Aulbach, Matthias B
Sent: Thursday, 03 May, 2018 16:52
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Between study correlation between two different outcomes

Hi,

I am conducting a meta-analysis using the great metafor package. I have run into the problem of dependency when using more than one outcome from the same set of studies. In an earlier attempt, I had run two univariate meta-analyses and then used the effect sizes for one outcome as a predictor in a meta-regression with the other outcome as the dependent variable. But that ignores the within-study correlation between the two, so I'd like to improve on that by handling the dependency in a multivariate meta-analysis, using rma.mv and this line of code (with yi denoting the effect sizes, vi their standard errors, type_outcome the kind of outcome that was measured, and ID the study identifier):

rp <- rma.mv(yi, vi, data=pd, mods = ~ factor(type_outcome) - 1, random = ~ type_outcome | ID)

This nicely gives me the different effects for the two kinds of outcomes. However, what I am so desperately interested in is the between-study correlation between the two outcomes, i.e. if there's a strong effect on one outcome, is there also a strong effect on the other (or not)? Is there a way to get that information, including confidence intervals and a significance test for the correlation coefficient?

Any kind of advice is deeply appreciated!

Best,

Matthias

_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis at r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
