[R-meta] Meta-analysis when sampling covariance matrices are missing

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Jan 16 18:18:30 CET 2018


Dear Célia,

If I understand you correctly, the general question here is how to analyze the relationship between two effects measured on the same subjects. So, for each study, we have [y_i1, y_i2] and corresponding sampling variances [v_i1, v_i2]. Ideally, we also have the covariance between [y_i1, y_i2], so we have the 2x2 var-cov matrix of the sampling errors for each study. In that case, one can fit a multivariate (or rather: bivariate) model along the lines of Berkey et al. (1998):

http://www.metafor-project.org/doku.php/analyses:berkey1998
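
For concreteness, a minimal sketch of such a fit (the names 'dat', 'study', 'outcome', 'yi', 'vi', and 'V' below are just placeholders, not the Berkey data):

library(metafor)

# 'dat' in long format: one row per study-outcome combination, with columns
# 'study', 'outcome' (two levels), 'yi' (effect), and 'vi' (sampling variance);
# 'V' is the block-diagonal matrix (or list) of the 2x2 sampling var-cov
# matrices, one block per study
res <- rma.mv(yi, V, mods = ~ outcome - 1,
              random = ~ outcome | study, struct = "UN", data = dat)
res  # 'rho' in the output is the estimated correlation of the true effects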

And then, based on the var-cov matrix of the random effects, we can estimate the correlation of the underlying true effects (the correlation is given directly in the output). One can even estimate the regression line that describes the linear relationship between the underlying true effects. See, for example:

van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. Statistics in Medicine, 21(4), 589-624.

Page 601 is the most relevant here.
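
Continuing the sketch above, both the correlation and the regression line of the true effects can be pulled out of the fitted model along those lines ('res' is the hypothetical model object from before):

rho   <- res$rho         # correlation of the true effects
tau   <- sqrt(res$tau2)  # SDs of the true effects
mu    <- coef(res)       # estimated average effects
slope <- unname(rho * tau[2] / tau[1])  # regression of true effect 2 on true effect 1
inter <- unname(mu[2] - slope * mu[1])
c(intercept = inter, slope = slope)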

If you do not know the covariances (and would simply assume them to be 0), then this approach is going to give you biased results, so I would not recommend it (and cluster-robust methods are not going to help you here).

An easier approach would be to simply treat one of the effects as your outcome and the other as a predictor. Technically, this isn't quite right, since the predictor is measured with error, which leads to biased estimates of the underlying true relationship (e.g., https://en.wikipedia.org/wiki/Errors-in-variables_models). Things are a bit more complex than in the 'standard' regression context, since the amount of error varies across studies, but it's the same fundamental issue. I haven't given this a lot of thought, but I would assume that the bias is again attenuating, so the strength of the relationship will tend to be underestimated. Unless somebody has a better idea, one could argue that if a relationship is still found, then this provides evidence that a relationship does exist, although its actual strength is uncertain.
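
If one goes that route, a rough sketch would be (again with placeholder names; 'dat_wide' has one row per study with columns 'yi1', 'vi1', 'yi2', 'vi2'):

library(metafor)

# regress the second effect on the first, using only the sampling variance
# of the outcome; the coefficient for 'yi1' estimates the (likely attenuated)
# relationship
res2 <- rma(yi2, vi2, mods = ~ yi1, data = dat_wide)
res2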

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Célia Sofia Moreira
Sent: Wednesday, 10 January, 2018 1:04
To: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Meta-analysis when sampling covariance matrices are missing

Dear all,

I have been reading some of the messages in the list related to my problem,
and I realized that “unknown correlations” is an old topic. Special thanks
to Prof. Wolfgang, James Pustejovsky, and Isabel Schlegel in
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000127.html.
Really helpful!

I also understood that my attempt to fit the multilevel model in my
previous message was wrong. Please forget the previous questions.

My data are multivariate, so the outcomes should be analyzed as such.
However, I only have sampling means and SDs, and thus only the variances of
the effects (SMDs) are available. So, I will impute a covariance matrix
(clubSandwich) and/or use RVE (robumeta), always with the indispensable
rma.mv (metafor).
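
For concreteness, a rough sketch of the workflow I have in mind (the variable names and the value of r are placeholders):

library(metafor)
library(clubSandwich)

# impute a sampling var-cov matrix, assuming a common correlation r among
# effects from the same study
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)

res <- rma.mv(yi, V, mods = ~ outcome - 1,
              random = ~ outcome | study, struct = "UN", data = dat)

# cluster-robust (RVE) inference as a safeguard against a misspecified V
coef_test(res, vcov = "CR2", cluster = dat$study)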

Nevertheless, I would like to investigate a regression between two effects.
However, due to the previous limitation, I have no idea whether this is
possible and, if so, how to do it. Thus, any recommendations / suggestions
would be much appreciated. If it is not possible, can the unstructured
correlation matrix from the rma.mv output be used to assess the strength of
the relationship between these effects?

Please, tell me your opinion!

Kind regards 

