[R-meta] Comparing dependent, overlapping correlation coefficients

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer using maastrichtuniversity.nl
Tue Aug 14 22:18:22 CEST 2018


You do not need escalc(). The rmat() function gives you the variances along the diagonal of the 'V' matrix.

The variances should be (1 - ri^2)^2 / (ni - 1). You should be able to double-check that these values correspond to your data. Since ni should be the same for r_XY and r_XZ within a study, it might be that the variances are roughly the same if the two correlations are not all that different. They should not be identical though (unless r_XY and r_XZ are the same).
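For example, a quick sanity check of that formula against the escalc() values you computed could look like this (a minimal sketch; the numbers are made up and just stand in for your own data):

    library(metafor)

    r_xy <- c(0.20, 0.34)   # hypothetical correlations
    n    <- c(100, 150)     # hypothetical sample sizes

    ## variance formula from above
    v_manual <- (1 - r_xy^2)^2 / (n - 1)

    ## escalc() with measure = "COR" uses the same formula for 'vi'
    dat <- escalc(measure = "COR", ri = r_xy, ni = n)

    cbind(v_manual, dat$vi)   # the two columns should match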
 
Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Anna-Lena Schubert
Sent: Tuesday, 14 August, 2018 14:20
To: James Pustejovsky
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] Comparing dependent, overlapping correlation coefficients

Hi James,
I used Wolfgang's script from the gist to calculate Cov(r_XY, r_XZ) by feeding it r_YZ. In the next step, I calculated Var(r_XY) and Var(r_XZ) using the escalc() function. However, Var(r_XY) always equals Var(r_XZ) for each study. Does this make sense?
I nevertheless added all three measures per study into a variance-covariance matrix such as:
             Study 1            Study 2
          r_XY     r_XZ     r_XY     r_XZ
r_XY     0.004   0.0001        0        0
r_XZ    0.0001    0.004        0        0
r_XY         0        0    0.008    0.002
r_XZ         0        0    0.002    0.008
Then, I tried to feed everything into a multivariate meta-analysis: 
    res <- rma.mv(yi, V, mods = ~ variableType - 1, random = ~ variableType | studyNum, struct="UN", data=dat, method="ML")
The estimates I get for both of the correlation coefficients correspond closely to those I get when only meta-analyzing one of the variable types, which seems great. However, I'm still somewhat concerned that Var(r_XY) = Var(r_XZ). Do you think there may have been some mistake in my code or does it make sense that these variances are equal? 
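For reference, a block-diagonal V like the one above can be assembled along these lines (a sketch using the numbers from the matrix; bldiag() from metafor is just one way to do it, and the per-study 2x2 blocks would in practice come from Wolfgang's gist function):

    library(metafor)

    ## per-study 2x2 blocks: variances and covariance of (r_XY, r_XZ)
    V1 <- matrix(c(0.004,  0.0001,
                   0.0001, 0.004), nrow = 2)
    V2 <- matrix(c(0.008, 0.002,
                   0.002, 0.008), nrow = 2)

    ## 4x4 block-diagonal matrix, as shown above
    V <- bldiag(V1, V2)

The rows of the data frame passed to rma.mv() have to be in the same order as the rows/columns of V.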
Best,
Anna-Lena
            
Am 10.08.2018 um 17:06 schrieb James Pustejovsky:
Anna-Lena, 

The approach that you suggested (putting the data in "long" format and defining an indicator variable for whether Y or Z is the correlate) is just what I would recommend. However, there is a complication in that the estimates r_XY and r_XZ are correlated (correlated correlation coefficients...say that six times fast!), and the degree of correlation depends on r_YZ. 

1) If you have extracted data on r_YZ then you could use this to compute Cov(r_XY, r_XZ) and then do a multivariate meta-analysis. See discussion here:
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-January/000483.html
And this function for computing the required covariance matrices:
https://gist.github.com/wviechtb/700983ab0bde94bed7c645fce770f8e9
There are at least three further alternatives that might be simpler:

2) If you have r_YZ you could use it to compute the sampling variance of the difference between r_XY and r_XZ, that is:

Var(r_XY - r_XZ) = Var(r_XY) + Var(r_XZ) - 2 * Cov(r_XY, r_XZ)

You could then do a univariate meta-analysis on the difference between correlations (see the sketch after option 4 below).

3) If you do not have r_YZ then you won't be able to estimate Cov(r_XY, r_XZ) very well. You could make a guess about r_YZ and then follow approach (1) or (2) above, using cluster-robust variance estimation to account for the possibly mis-estimated sampling variance-covariance matrix.

4) Or you could ignore the covariance between r_XY and r_XZ entirely, fit the model to the long data as you describe above, and use cluster-robust variance estimation (clustering by sample) to account for the dependence between r_XY and r_XZ. This is the quickest and dirtiest approach, and the first thing I would try in practice before moving on to the more refined approaches above.
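To make (2) and (4) a bit more concrete, here is a rough sketch. The column names (r_xy, r_xz, study, variable) and all numbers are placeholders with only two made-up studies, and the covariance used in (2) is only a stand-in -- in practice you would compute it from r_YZ with the gist function linked above.

    library(metafor)

    ## (2) univariate meta-analysis of the difference r_XY - r_XZ
    dat_wide <- data.frame(study = 1:2,
                           r_xy  = c(0.20, 0.34),
                           r_xz  = c(0.30, 0.43),
                           n     = c(100, 150))

    v_xy <- (1 - dat_wide$r_xy^2)^2 / (dat_wide$n - 1)
    v_xz <- (1 - dat_wide$r_xz^2)^2 / (dat_wide$n - 1)

    ## placeholder only; compute Cov(r_XY, r_XZ) from r_YZ with the gist function
    cov_xy_xz <- 0.5 * sqrt(v_xy * v_xz)

    d_i <- dat_wide$r_xy - dat_wide$r_xz
    v_d <- v_xy + v_xz - 2 * cov_xy_xz
    res2 <- rma(yi = d_i, vi = v_d)

    ## (4) long format, ignoring Cov(r_XY, r_XZ), then cluster-robust SEs
    dat_long <- data.frame(study    = rep(1:2, each = 2),
                           variable = rep(c("Y", "Z"), times = 2),
                           ri       = c(0.20, 0.30, 0.34, 0.43),
                           ni       = rep(c(100, 150), each = 2))
    dat_long <- escalc(measure = "COR", ri = ri, ni = ni, data = dat_long)

    res4 <- rma.mv(yi, vi, mods = ~ variable - 1,
                   random = ~ variable | study, struct = "UN",
                   data = dat_long)
    robust(res4, cluster = dat_long$study)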

James
 
On Fri, Aug 10, 2018 at 9:21 AM Anna-Lena Schubert <anna-lena.schubert using psychologie.uni-heidelberg.de> wrote:
Dear all,

I want to run a meta-analysis that compares dependent, overlapping
correlation coefficients (i.e., I want to see if X correlates more
strongly with Y than it does with Z). I already ran a meta-analysis
separately for both of these correlations and would now like to compare
those two pooled effect sizes statistically. Confidence intervals of the
two correlations do not overlap (r1 = .18 [.12; .24]; r2 = .32 [.25;
.39]), but I wonder if there may be a more elegant way to compare these
correlations than just based on CIs.

I wonder, for example, if a factorial variable could be used to identify
those correlations in a "long" data format style, and if I could test
for a significant interaction between variable type (Y vs. Z) and the
correlation in a meta-analysis:

    Study   Variable   r
    1       Y          .20
    1       Z          .30
    2       Y          .34
    2       Z          .43
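In metafor terms, I imagine this would look roughly like the following (the sample sizes are made up, since they are not shown in the table, and the coefficient for Variable would then test the Y vs. Z difference):

    library(metafor)

    dat <- data.frame(Study    = c(1, 1, 2, 2),
                      Variable = c("Y", "Z", "Y", "Z"),
                      ri       = c(0.20, 0.30, 0.34, 0.43),
                      ni       = c(100, 100, 150, 150))   # made-up sample sizes

    dat <- escalc(measure = "COR", ri = ri, ni = ni, data = dat)

    ## moderator test of Variable (Y vs. Z); note that this simple model
    ## ignores the dependence of the two correlations within a study
    res <- rma(yi, vi, mods = ~ Variable, data = dat)
    res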

I would greatly appreciate if anyone could tell me if that's a good idea
or could recommend other approaches. Thanks in advance for any offers of
help!

Best,
Anna-Lena

