[R-meta] Construct the covariance-matrices of different effect sizes

Tzlil Shushan tz|||21092 @end|ng |rom gm@||@com
Fri Jan 8 00:16:19 CET 2021


Dear Wolfgang and James,

Apologies in advance for the long essay..

In my meta-analysis I obtained different effect sizes from test-retest and
correlational designs. Accordingly, I performed 4 different meta-analyses,
one for each effect size (a rough sketch of how these can be computed is
included after the list):

• Raw mean difference from test-retest
• Standard deviation (using the Nakagawa et al. 2015 approach) from test-retest
• Intraclass correlation (transformed to Fisher's z values) from test-retest
• Pearson correlation coefficient (transformed to Fisher's z values) derived
from the same test against a criterion measure
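
For context, a rough sketch of how these effect sizes can be computed with
metafor's escalc() (the column names m1i, sd1i, ri, ni, icc etc. are
placeholders for my dataset; the ICC step is done by hand since escalc() has
no ICC measure):

library(metafor)

# Raw mean difference between test and retest (dependent samples)
dat_md <- escalc(measure="MC", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i,
                 ri=ri, ni=ni, data=dat)

# Log-transformed standard deviation (Nakagawa et al. 2015)
dat_sd <- escalc(measure="SDLN", sdi=sd1i, ni=ni, data=dat)

# Pearson correlation against the criterion, Fisher z-transformed
dat_cor <- escalc(measure="ZCOR", ri=ri_crit, ni=ni, data=dat)

# ICC: Fisher's z transform applied by hand; an appropriate sampling
# variance approximation then has to be supplied (not shown here)
dat$zi_icc <- 0.5 * log((1 + dat$icc) / (1 - dat$icc))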

Because many studies meeting the inclusion criteria provided more than one
effect size through various kinds of repeated measures (for example,
multiple intensities of the test, or repeated measures across the year), all
based on a common sample of participants, I treated each unique sample
as an independent study (NOTE: this approach serves our purposes best, and
adding a further level would result in a low number of clusters, which I
don't want given the use of RVE).

Thanks to the great discussions in this group, we've done the following:
(1) used rma.mv() to estimate the overall average estimate and the variance
components in a hierarchical working model. The same goes for the
meta-regressions we performed.

(2) computed robust variance estimates with the robust() and coef_test()
functions, clustering at the level of studies (the same is true for both the
overall models and the meta-regressions). A rough sketch of this workflow is
included below.
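
Assuming a data frame dat with placeholder columns yi, vi, study and es_id
(effect-size ID within a sample), it looks roughly like this:

library(metafor)
library(clubSandwich)

# Step 1: hierarchical working model with effect sizes nested within
# studies (here each "study" is a unique sample)
res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data=dat)

# Step 2: robust variance estimation, clustering at the study level
robust(res, cluster=dat$study)
coef_test(res, vcov="CR2", cluster=dat$study)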

However, after reading some threads in the group over the last few weeks
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2021-January/002565.html
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-February/000647.html
and more, I think that one further step is to provide variance-covariance
matrices for each meta-analysis before steps 1 and 2 noted above.

In this regard I have some other questions:

(1) Is it compulsory to create (an estimate of) the variance-covariance
matrix, given the structure of my dataset?

(2) IF YES, I'm not sure whether I can use the same covariance formulas for
all effect sizes. For example, does impute_covariance_matrix() from
clubSandwich work fine with all effect sizes (mean difference, SD, ICC etc.),
or should I estimate the covariance matrix with a different function for each
effect size? A sketch of what I mean is below.
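
As far as I understand, impute_covariance_matrix() simply assumes a constant
correlation r between the effect sizes within a cluster, so in principle it
can be applied to any vector of sampling variances; a minimal sketch (r = 0.7
is just an assumed value):

library(clubSandwich)

# Constant-correlation working model: within each study (cluster),
# cov(yi_j, yi_k) = r * sqrt(vi_j * vi_k)
V <- impute_covariance_matrix(vi=dat$vi, cluster=dat$study, r=0.7)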

Based on my reading and the suggestions I received:
• I used impute_covariance_matrix() for mean difference.

• For the standard deviations I constructed the function below (r is the
assumed correlation between repeated measures within a sample, varied in the
sensitivity analysis further down; bldiag() is from metafor):
calc.v <- function(x) {
  # vi = 1/(2*(ni - 1)) for lnSD (Nakagawa et al. 2015), so the covariance
  # between two lnSDs from the same sample is approximately r^2/(2*(ni - 1))
  v <- matrix(r^2/(2*(x$ni[1] - 1)), nrow=nrow(x), ncol=nrow(x))
  diag(v) <- x$vi
  v
}
V <- bldiag(lapply(split(dat, dat$study), calc.v))
http://www.metafor-project.org/doku.php/analyses:gleser2009

• for the ICC and the Pearson correlation I've looked at
https://wviechtb.github.io/metafor/reference/rcalc.html but I couldn't
create something appropriate for my dataset (I don't really know how to
specify var1 and var2); my current understanding of the var1/var2 setup is
sketched below, with made-up data.
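
As far as I can tell, var1 and var2 simply label the two variables each
correlation is between, so that correlations sharing a variable within the
same study get a non-zero covariance. All names and values below are purely
illustrative:

library(metafor)

# One row per correlation; var1/var2 identify the pair of variables the
# correlation refers to (e.g., test at a given intensity vs. criterion)
dat_cor <- data.frame(
  study = c(1, 1, 1, 2),
  var1  = c("test_low",  "test_high", "test_low",  "test_low"),
  var2  = c("criterion", "criterion", "test_high", "criterion"),
  ri    = c(0.62, 0.71, 0.80, 0.55),
  ni    = c(25, 25, 25, 40)
)

# Returns the dataset together with the variance-covariance matrix of the
# correlations; rtoz=TRUE works on the Fisher z scale
tmp <- rcalc(ri ~ var1 + var2 | study, ni=ni, data=dat_cor, rtoz=TRUE)
V <- tmp$V

The catch, as I understand it, is that the covariance between, say,
r(test_low, criterion) and r(test_high, criterion) depends on
r(test_low, test_high), so the correlations among the test versions
themselves need to be available (or imputed); if they are not reported, an
assumed-r approach like the sensitivity analysis below seems to be the
fallback.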

In this regard, I ran a sensitivity analysis (with r = 0.3, 0.5, 0.7 and
0.9), which revealed similar overall estimates (also similar to those from
the working models without a covariance matrix), although it changed the
magnitudes of sigma^2.1 and sigma^2.2 a bit; a sketch of this sensitivity
analysis is below.
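
For completeness, a rough sketch of that sensitivity analysis (again with
placeholder column names, and using impute_covariance_matrix() as the
working covariance structure):

library(metafor)
library(clubSandwich)

for (r in c(0.3, 0.5, 0.7, 0.9)) {
  V <- impute_covariance_matrix(vi=dat$vi, cluster=dat$study, r=r)
  res <- rma.mv(yi, V, random = ~ 1 | study/es_id, data=dat)
  print(coef_test(res, vcov="CR2", cluster=dat$study))
  print(res$sigma2)  # compare the variance components across assumed r values
}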

I'd be thankful for any thoughts..

Kind regards and thanks in advance!

Tzlil

Tzlil Shushan | Sport Scientist, Physical Preparation Coach

BEd Physical Education and Exercise Science
MSc Exercise Science - High Performance Sports: Strength &
Conditioning, CSCS
PhD Candidate Human Performance Science & Sports Analytics



