[R-meta] Handling dependencies among multiple independent and dependent variables

Jens Schüler jens.schueler at wiwi.uni-kl.de
Wed Apr 4 22:04:58 CEST 2018


I switched over to the devel version, dropped one variable (it will not occur
in the path model anyway), and the error/warning about the ratio of sampling
variances no longer occurs. However, the PD issue is still a thing.
To mess around with things, I tried struct = "UN", which stopped because it
did not achieve convergence:
Optimizer (nlminb) did not achieve convergence (convergence = 1).
Timing stopped at: 2.274e+05 2.057e+04 4.137e+04

As I am a bit out of my depth here, what would/could be a suitable option to
choose for struct, given that the 422 primary studies drew random or
purposive samples of firms from some populations and mostly relied on
cross-sectional data?
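For reference, the kind of stage-1 call I have in mind mirrors the standard multivariate setup. Below is a self-contained sketch using the dat.berkey1998 example data that ships with metafor (my own data set is analogous, just much larger); struct = "CS" is only one of several cheaper alternatives to "UN":

```r
library(metafor)

# Example data with two outcomes per trial; V is block-diagonal with one
# 2x2 sampling var-cov block per trial.
dat <- dat.berkey1998
V <- bldiag(lapply(split(dat[, c("v1i", "v2i")], dat$trial), as.matrix))

# Compound symmetry ("CS") estimates one variance and one correlation --
# far fewer parameters than an unstructured ("UN") matrix.
res <- rma.mv(yi, V, mods = ~ outcome - 1,
              random = ~ outcome | trial,
              struct = "CS", data = dat)
```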

Turning to the performance aspect, using the Microsoft R version with
Intel's MKL does seem to speed things up for AMD CPUs, but (of course) not as
much as it does for Intel. If someone does not want to use the Microsoft
build and/or wants to get more out of an AMD CPU on Windows, it seems
you can compile and build your own R with OpenBLAS:

https://www.avrahamadler.com/r-tips/build-openblas-for-windows-r64/


Best
Jens


-----Original Message-----
From: Viechtbauer Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl>
Sent: Wednesday, 4 April 2018 00:03
To: Jens Schüler <jens.schueler at wiwi.uni-kl.de>;
r-sig-meta-analysis at r-project.org
Subject: RE: Handling dependencies among multiple independent and dependent
variables

If you are trying to estimate an 'unstructured' var-cov matrix for 8 or 9
variables, then you are looking at 36 or 45 parameters. That is not a
trivial optimization problem and could take a long time.
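(The count is just k(k+1)/2 -- k variances plus k(k-1)/2 correlations:)

```r
# Free parameters in an unstructured var-cov matrix for k outcomes:
# k variances plus k*(k-1)/2 correlations = k*(k+1)/2 in total.
k <- 8:9
k * (k + 1) / 2   # 36 45
```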

Depending on the model you are fitting, the V matrix itself does not have to
be PD, as long as the marginal var-cov matrix is. But you are more likely to
run into problems if V is not PD.

With respect to that error: in the 'devel' version, I turned that error
into a warning a while ago, so the model will run, but the warning should be
taken seriously -- the results might not be trustworthy.

Best,
Wolfgang

-----Original Message-----
From: Jens Schüler [mailto:jens.schueler at wiwi.uni-kl.de]
Sent: Tuesday, 03 April, 2018 23:42
To: Viechtbauer Wolfgang (SP); r-sig-meta-analysis at r-project.org
Subject: RE: Handling dependencies among multiple independent and dependent
variables

Hi Wolfgang,

I drew 9 variables of interest from 422 samples and 8 of these variables
will make it into the MASEM.
Some of these samples report "multiple measurements" on one or more of these
variables, e.g., return on investment and return on assets for financial
performance -- hence the question about handling the dependency.
However, the correlations are reported very inconsistently across these
samples: I have 2063 observations drawn from them, and the V matrix
produced by rmat turned out to be non-positive definite.
For fun (self-educational reasons) I applied the nearPD function to it just
to see how things go, but then the rma.mv function threw an error: "Ratio of
largest to smallest sampling variance extremely large. Cannot obtain stable
results."
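For concreteness, this is the kind of thing I did, with a toy matrix as a stand-in for the much larger V matrix from rmat:

```r
library(Matrix)

# Toy 3x3 "correlation" matrix that is not positive definite, standing in
# for the V matrix produced by rmat.
V <- matrix(c( 1.0, 0.9, -0.9,
               0.9, 1.0,  0.9,
              -0.9, 0.9,  1.0), nrow = 3)
min(eigen(V)$values)   # negative, so V is not PD

# nearPD() returns the nearest positive (semi-)definite matrix; with
# corr = TRUE the unit diagonal is preserved.
V_pd <- as.matrix(nearPD(V, corr = TRUE)$mat)
min(eigen(V_pd)$values)   # no longer negative (within tolerance)
```

Of course, the adjustment changes the sampling variances/covariances, which may be where the extreme variance ratio came from.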

I guess the data I have is too thin/imperfect to properly account for
dependencies in such an advanced way.

Best
Jens

-----Original Message-----
From: Viechtbauer Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl>
Sent: Tuesday, 3 April 2018 23:04
To: Jens Schüler <jens.schueler at wiwi.uni-kl.de>;
r-sig-meta-analysis at r-project.org
Subject: RE: Handling dependencies among multiple independent and dependent
variables

Glad you found that page -- I would have directed you there anyway.

How many parameters are you actually trying to estimate?

Best,
Wolfgang

-----Original Message-----
From: Jens Schüler [mailto:jens.schueler at wiwi.uni-kl.de]
Sent: Tuesday, 03 April, 2018 3:45
To: Viechtbauer Wolfgang (SP); r-sig-meta-analysis at r-project.org
Subject: RE: Handling dependencies among multiple independent and dependent
variables

Never mind, I just went through your info on speeding up model fitting:
http://www.metafor-project.org/doku.php/tips:speeding_up_model_fitting

Even though I am using an AMD R5 processor with 6 physical cores, I decided
to give the MKL avenue a shot, and it cranked the CPU usage up from
about 7% to 55% -- hopefully this speeds things up.
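Besides the BLAS, the tips page also mentions optimizer-related knobs. A small reproducible sketch with the dat.berkey1998 example data that ships with metafor (whether these settings help on my data is a guess on my part):

```r
library(metafor)

# Toy multivariate example: two outcomes per trial, block-diagonal V.
dat <- dat.berkey1998
V <- bldiag(lapply(split(dat[, c("v1i", "v2i")], dat$trial), as.matrix))

res <- rma.mv(yi, V, mods = ~ outcome - 1,
              random = ~ outcome | trial, struct = "UN", data = dat,
              verbose = TRUE,                       # print progress per iteration
              control = list(optimizer = "optim",   # try a different optimizer
                             optmethod = "BFGS"))
```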

Best
Jens

-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> On
behalf of Jens Schüler
Sent: Monday, 2 April 2018 22:17
To: Viechtbauer Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl>;
r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Handling dependencies among multiple independent and
dependent variables

Hi Wolfgang,

after rearranging my coding sheet, the rmat function worked like a charm (of
course I screwed up here and there before I got it right).
However, currently I am wondering about the computational performance of
matrix calculations in R.

My data consists of ~1700 observations drawn from 422 samples, and the
rma.mv function has now been running for over 5 hours.
I use the latest base version of R, together with RStudio, and have a
potent CPU in my desktop -- of which R only uses about 7%.
So, are the calculations really that lengthy, or is it more likely that I
still screwed something up?

Best
Jens

