[R-meta] Handling dependencies among multiple independent and dependent variables

Jens Schüler jens.schueler at wiwi.uni-kl.de
Tue Apr 3 03:44:32 CEST 2018


Never mind, I just went through your info on speeding up model fitting:
http://www.metafor-project.org/doku.php/tips:speeding_up_model_fitting

Even though I am using an AMD R5 processor with 6 physical cores, I decided
to give the MKL avenue a shot and, well, it cranked the CPU usage up from
about 7% to 55% - hopefully this speeds things up.
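
In case it helps anyone else following along, here is a quick way to check
whether R is actually picking up the optimized BLAS (a minimal sketch; the
matrix size is arbitrary):

## recent R versions list the BLAS/LAPACK libraries in the sessionInfo() output
sessionInfo()

## crude benchmark: a large matrix product should run noticeably faster (and
## use several cores) with a multi-threaded BLAS such as MKL or OpenBLAS
X <- matrix(rnorm(2000 * 2000), nrow = 2000)
system.time(X %*% X)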


Best
Jens



-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> On
Behalf Of Jens Schüler
Sent: Monday, 2 April 2018 22:17
To: Viechtbauer Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl>;
r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Handling dependencies among multiple independent and
dependent variables

Hi Wolfgang,

after rearranging my coding sheet, the rmat function worked like a charm (of
course I screwed up here and there before I got it right).
However, I am now wondering about the computational performance of the matrix
calculations in R.

My data consist of ~1700 observations drawn from 422 samples, and the rma.mv
function has now been running for over 5 hours.
I use the latest base version of R, together with RStudio, and have a fairly
powerful CPU in my desktop - of which R is only using about 7%.
So, are the calculations really that time-consuming, or is it more likely
that I still screwed something up?
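
In case it is relevant, here is a stripped-down sketch of the kind of call and
options I have been looking at (the formula and random-effects structure are
placeholders, not my actual model):

library(metafor)

## placeholder model specification; verbose=TRUE prints the optimizer's
## progress so a long run is at least visible, and sparse=TRUE or switching
## the optimizer via 'control' can sometimes help, depending on the model
res <- rma.mv(yi, V, mods = ~ type - 1, random = ~ type | id, struct="UN",
              data=dat, sparse=TRUE, verbose=TRUE,
              control=list(optimizer="optim", optmethod="BFGS"))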


Best
Jens

-----Original Message-----
From: Viechtbauer Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl>
Sent: Saturday, 24 March 2018 14:50
To: Jens Schüler <jens.schueler at wiwi.uni-kl.de>;
r-sig-meta-analysis at r-project.org
Subject: RE: Handling dependencies among multiple independent and dependent
variables

I have posted a function that allows for processing data like this. See
here:

https://gist.github.com/wviechtb/700983ab0bde94bed7c645fce770f8e9

Take a look especially at the very last example (note: bldiag() is in the
metafor package, so load that package first). The data here look like this:

   id     yi var1 var2
1   1  0.179  BMI  PUL
2   1  0.396  BMI  SBP
3   1  0.088  PUL  SBP
4   1  0.080  BMI  DPB
5   1 -0.042  DPB  PUL
6   1  0.719  DPB  SBP
7   2  0.179  BMI  PUL
8   2  0.396  BMI  SBP
9   2  0.088  PUL  SBP
10  2  0.080  BMI  DPB
11  2 -0.042  DPB  PUL
12  2  0.719  DPB  SBP

'yi' is the correlation between 'var1' and 'var2'. Then you can use:

sav <- rmat(yi ~ var1 + var2 | id, n=c(20,30), data=dat)
sav

(so with n, you denote the sample sizes of study 1 and study 2 and so on).

This will return the dataset plus the full var-cov matrix of the
correlations (a 12x12 matrix here). You can then use rma.mv() for analyzing
these data, setting argument 'V' equal to the var-cov matrix (sav$V).
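
To make this concrete, here is a minimal self-contained sketch (assuming you
have saved the rmat() function from the gist as 'rmat.r' in your working
directory; the sample sizes 20 and 30 are made up):

library(metafor)   # provides bldiag() and rma.mv()
source("rmat.r")   # the rmat() function from the gist above

dat <- data.frame(
   id   = rep(1:2, each=6),
   yi   = rep(c(0.179, 0.396, 0.088, 0.080, -0.042, 0.719), 2),
   var1 = rep(c("BMI", "BMI", "PUL", "BMI", "DPB", "DPB"), 2),
   var2 = rep(c("PUL", "SBP", "SBP", "DPB", "PUL", "SBP"), 2))

sav <- rmat(yi ~ var1 + var2 | id, n=c(20,30), data=dat)
sav$V   # the 12x12 var-cov matrix of the correlations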

So, the idea is this: Set up your dataset as above, with var1 and var2
indicating the types of variables. So, for the 4 cases you describe below,
you would have:

id var1    var2    yi
1  orient1 prof1   .
2  orient1 prof1   .
2  orient1 prof2   .
2  prof1   prof2   .
3  orient1 prof1   .
3  orient2 prof1   .
3  orient1 orient2 .
4  orient1 prof1   .
4  orient1 prof2   .
4  orient2 prof1   .
4  orient2 prof2   .
4  prof1   prof2   .
4  orient1 orient2 .

Note: Even though you are only interested in cor(orient, prof), you need the
prof-prof and orient-orient correlations, because those are needed to
compute the covariances. Then use rmat() as above.

For the rma.mv() model, you actually do not want to distinguish between
orient1 and orient2 and prof1 and prof2. So you want to expand your dataset
so that it looks like this:

id var1    var2    yi  type
1  orient1 prof1   .   orient.prof
2  orient1 prof1   .   orient.prof
2  orient1 prof2   .   orient.prof
2  prof1   prof2   .   prof.prof
3  orient1 prof1   .   orient.prof
3  orient2 prof1   .   orient.prof
3  orient1 orient2 .   orient.orient
4  orient1 prof1   .   orient.prof
4  orient1 prof2   .   orient.prof
4  orient2 prof1   .   orient.prof
4  orient2 prof2   .   orient.prof
4  prof1   prof2   .   prof.prof
4  orient1 orient2 .   orient.orient

Then you can use:

rma.mv(yi, sav$V, mods = ~ type - 1, random = ~ type | id, struct="UN",
data=dat)

This will give you the estimated average correlation for orient.prof,
prof.prof, and orient.orient. You are interested in the first (although the
other two types are also interesting).

By coding 'type' in this way, when a study provides multiple orient-prof
correlations, they are in essence pooled together in the model fitting
(while properly accounting for their covariances, since those are in sav$V).
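
The 'type' variable does not have to be created by hand; a minimal sketch
(the regular expression simply strips the trailing digits from var1 and var2
and is just one way of doing this, assuming var1/var2 are consistently
ordered as in the table above):

## derive 'type' by dropping the numeric suffix from var1 and var2
dat$type <- paste(sub("[0-9]+$", "", dat$var1),
                  sub("[0-9]+$", "", dat$var2), sep=".")

res <- rma.mv(yi, sav$V, mods = ~ type - 1, random = ~ type | id,
              struct="UN", data=dat)
res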

One could also approach this from a 'MASEM' (meta-analytic structural
equation modeling) framework. If you are interested in that approach, take a
look at Mike Cheung's book "Meta-Analysis: A Structural Equation Modeling
Approach" or Suszanne Jak's book "Meta-Analytic Structural Equation
Modelling".

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
On Behalf Of Jens Schüler
Sent: Tuesday, 06 March, 2018 14:50
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Handling dependencies among multiple independent and
dependent variables

Hi all,

I am currently working on a meta-analysis on strategic orientations (IV) and
firm performance (DV), using correlational data, and I am struggling with how
best to handle dependencies among the effect sizes.

Short elaboration of the issue:
The strategic orientation I am interested in consists of three subdimensions.
Some studies report correlations between the subdimensions and, e.g.,
profitability, whereas other studies report only a correlation between the
whole construct and profitability. Moreover, some studies use more than one
profitability measure. I am interested in the overall aggregate effect
between the "whole" strategic orientation and "whole" profitability.

This leaves me with the following cases:
1. Whole orientation and single profitability measure (trivial)
2. Whole orientation and multiple profitability measures
3. Multiple dimensions and single profitability measure
4. Multiple dimensions and multiple profitability measures

Following the mailing list and the metafor website, I could handle the 2nd
case by nesting the multiple effects within studies and using the
impute_covariance_matrix function of the clubSandwich package. However, I am
not exactly sure how to appropriately handle cases 3 and 4 and, especially,
how to combine cases 2 to 4 in order to run the analysis on the aggregate
level (whole orientation - whole profitability).

Since I am working with correlations, the respective studies report the
correlations among these variables, e.g. between the dimensions, between the
profitability measures, and between the dimensions and the profitability
measures.
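
For case 2, the approach I had in mind looks roughly like this (a sketch only;
'study' and 'esid' are placeholder column names and r = 0.6 is an assumed
common within-study correlation):

library(metafor)
library(clubSandwich)

## dat is assumed to contain yi (correlations), vi (sampling variances),
## study (study identifier), and esid (effect-size identifier within study)
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
res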

Best
Jens


