[R-meta] imputing covariance matrices for meta-analysis of dependent effects
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Aug 10 22:15:23 CEST 2017
"considering that the formulas given in Gleser & Olkin and Kalaian & Raudenbush are themselves only large-sample approximations."
Indeed, good point. Since I was playing around with this anyway, here is an example:
### two standardized mean differences based on the same sample (two outcomes, same two groups)
dat <- data.frame(study=c(1,1), n1i=c(30,30), n2i=c(25,25), m1i=c(84,33), m2i=c(78,32), sd1i=c(10.2,1.4), sd2i=c(11.4,1.3))
dat$spi <- with(dat, sqrt(((n1i-1)*sd1i^2 + (n2i-1)*sd2i^2) / (n1i+n2i-2))) ### pooled SD
dat$yi <- with(dat, (m1i - m2i) / spi)                                      ### standardized mean difference
dat$vi <- with(dat, 1/n1i + 1/n2i + yi^2 / (2*(n1i+n2i)))                   ### large-sample sampling variance
### covariance according to eq. (10) from Kalaian and Raudenbush (1996)
with(dat, (1/n1i[1] + 1/n2i[1]) * 0.7 + (1/2 * yi[1] * yi[2] * 0.7^2) / (n1i[1] + n2i[1]))
### covariance imputed as r * sqrt(vi1 * vi2) via James' function
impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.7)
I wouldn't lose any sleep over the difference.
As for the three-level model being used for multivariate analyses -- there is also:
Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2015). Meta-analysis of multiple outcomes: A multilevel approach. Behavior Research Methods, 47(4), 1274-1294.
Personally, I do not consider this approach fully sufficient. For example, it assumes that the amount of heterogeneity is the same for all outcomes. That is not something I would want to assume a priori, and I can easily construct examples where it leads to inadequate performance of the multilevel approach (e.g., the CI will be too narrow for a more heterogeneous outcome and too wide for a less heterogeneous one). If the data are multivariate, I would like people to analyze them as such.
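To make the difference concrete, here is a minimal sketch of the two random-effects structures in metafor (just an illustration; it assumes a data frame 'dat' with placeholder columns yi, vi, study, and outcome, and an imputed 'working' var-cov matrix V, e.g., from James' function as above):

library(metafor)
### multilevel model: a single tau^2 shared by all outcomes
res.ml <- rma.mv(yi, V, random = ~ 1 | study/outcome, data=dat)
### multivariate model: a separate tau^2 for each outcome (struct="UN")
res.mv <- rma.mv(yi, V, mods = ~ outcome - 1, random = ~ outcome | study, struct="UN", data=dat)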
Best,
Wolfgang
-----Original Message-----
From: James Pustejovsky [mailto:jepusto at gmail.com]
Sent: Thursday, August 10, 2017 21:34
To: Viechtbauer Wolfgang (SP)
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] imputing covariance matrices for meta-analysis of dependent effects
Wolfgang,
Thanks for your thoughts. I agree that the covariance formula I'm using is an approximation, and would be most appropriate for use in conjunction with cluster-robust variance estimation. It might be more accurate to describe this method as imputing a correlation between the effect size estimates, rather than a correlation between the outcomes. In practice, I doubt that there will be much difference though, particularly considering that the formulas given in Gleser & Olkin and Kalaian & Raudenbush are themselves only large-sample approximations.
Regarding your concern about using three-level models in this context, I have seen this method cropping up recently as well, with citations to the following paper:
Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013). Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45(2), 576–594. https://doi.org/10.3758/s13428-012-0261-6
The authors argue that the three-level model is actually robust to the mis-specification problem you noted. However, the simulation evidence that they present is limited to a simple bivariate meta-analysis model with no covariates. I am not sure whether the robustness property would hold under more complicated models.
James
On Thu, Aug 10, 2017 at 2:11 PM, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Hi James,
This is indeed useful.
However, I am not sure if you are computing the covariances correctly. You are essentially using:
covariance = correlation * sqrt(variance1 * variance2)
But apparently, if one goes through the derivation for the covariance for standardized mean differences, one ends up with a different equation (equation 10 in Kalaian and Raudenbush, 1996). In fact, the equation for the covariance depends on the measure used. See, for example:
Wei, Y., & Higgins, J. P. (2013). Estimating within-study covariances in multivariate meta-analysis with multiple outcomes. Statistics in Medicine, 32(7), 1191-1205.
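To spell out eq. (10): for two standardized mean differences d1 and d2 computed from the same two groups (of sizes n1 and n2), with r the correlation between the outcomes, the covariance is (a minimal sketch; the function name is just for illustration):

cov_smd <- function(d1, d2, n1, n2, r) {
  r * (1/n1 + 1/n2) + r^2 * d1 * d2 / (2 * (n1 + n2))
}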
Historical side note: Interestingly, the 'covariance = correlation * sqrt(variance1 * variance2)' equation for standardized mean differences was also used in:
Raudenbush, S. W., Becker, B. J., & Kalaian, H. (1988). Modeling multivariate effect sizes. Psychological Bulletin, 103, 111-120.
but this was later corrected in Kalaian and Raudenbush (1996) based on Gleser and Olkin (1994).
In practice, it probably makes relatively little difference how exactly one computes those covariances, especially if one is 'guestimating' the correlation between the measures anyway (and then follows things up with some kind of cluster-robust approach, as you describe on your blog). What matters, as far as I am concerned, is that one actually computes some kind of covariances in the first place (to get a better 'working' var-cov matrix to begin with).

I am seeing an increasing number of papers where multiple effect size estimates based on the same sample (so, multivariate data) are analyzed using a multilevel model like the one described by Konstantopoulos (2011). But that model assumes that the sampling errors are uncorrelated, so this is a misapplication of the model. That is also why I added this at one point:
http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011#uncorrelated_sampling_errors
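To illustrate what I mean, a minimal sketch of such an analysis (placeholder data frame 'dat' with columns yi, vi, and study; it assumes James' impute_covariance_matrix() function from his post is available):

library(metafor)
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.7)  ### 'working' var-cov matrix
res <- rma.mv(yi, V, random = ~ 1 | study, data = dat)
robust(res, cluster = dat$study)  ### cluster-robust (sandwich) inference afterwards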
As for the different results reported in Kalaian and Raudenbush (1996) and the ones obtained with metafor -- a couple of years ago, I also entered those data (from Table 1) and tried to re-analyze the dataset (as I was planning to also include it in metafor) and ran into similar discrepancies. In fact, if one re-creates the scatterplots shown in Figure 1 from the data reported in Table 1, it becomes clear that there must be several printing errors. So it is not really possible to reproduce the results from that paper, which is unfortunate, since it would be a nice illustration of the multivariate approach.
Best,
Wolfgang
--
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of James Pustejovsky
Sent: Thursday, August 10, 2017 17:59
To: Michael Dewey
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] imputing covariance matrices for meta-analysis of dependent effects
Michael,
I was not aware of the metavcov package, so thank you for pointing it out.
On first glance, it looks like the metavcov functions are configured based
on the assumption that you have detailed information about the correlations
between outcomes for each study (i.e., it requires a list of correlation
matrices as input). The function from my previous message is a simpler
utility function, for use when you need to make more or less ad hoc
assumptions about the correlations. So I would say that it does complement
the metavcov package, but I would welcome corrections if this is not an
accurate assessment.
Best,
James
On Thu, Aug 10, 2017 at 10:43 AM, Michael Dewey <lists at dewey.myzen.co.uk>
wrote:
> Dear James
>
> Not sure how relevant this is but does it complement in any way the
> package https://CRAN.R-project.org/package=metavcov ? I have not used it
> by the way.
>
> Michael
>
> On 10/08/2017 15:04, James Pustejovsky wrote:
>
>> All,
>>
>> A common problem in multivariate meta-analysis is that the information
>> needed to calculate the correlation between effect size estimates is
>> not reported in available sources, even when the variances of the
>> estimates can be calculated. One approach to handling this situation
>> is to simply make an informed guess about the correlation between the
>> effect sizes. I use this approach fairly often and have written a
>> function that makes some of the calculations easier. The function
>> calculates a block-diagonal variance-covariance matrix based on the
>> sampling variances and a guess about the degree of correlation. More
>> details available here:
>>
>> http://jepusto.github.io/imputing-covariance-matrices-for-multi-variate-meta-analysis
>>
>> There's nothing innovative about the methods I describe, but I figured
>> that others might find the function useful. I would welcome comments,
>> questions, or debate about the utility of the approach I used.
>>
>> Cheers,
>> James