[R-meta] Cross-Classified Random-Effects Model in rma.mv

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Jan 21 23:06:33 CET 2019


Dear Mark,

I took another look at the article, entered the data, and re-ran the analyses with metafor.

The model with crossed random effects is:

res <- rma.mv(yi, vi, random = list(~ 1 | study/outcome, ~ 1 | subscale), data=dat)
print(res, digits=3)

These are the results:

Multivariate Meta-Analysis Model (k = 68; method: REML)

Variance Components:

           estim   sqrt  nlvls  fixed         factor 
sigma^2.1  0.012  0.111     57     no          study 
sigma^2.2  0.009  0.096     68     no  study/outcome 
sigma^2.3  0.036  0.190      7     no       subscale 

Model Results:

estimate     se    zval   pval   ci.lb  ci.ub 
  -0.038  0.084  -0.452  0.652  -0.202  0.126    

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

These are identical to the CCREM results in Table 1 (except for the CI for mu, which SAS computes using a Satterthwaite approximation, while the above is based on a standard normal approximation). Note that 'outcome within study' is unique for every row of the dataset (k=68 and nlvls=68 for this random effect), so this is the estimate-level random effect. So, yes, this is exactly the model discussed below.
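Incidentally, that uniqueness can be checked directly in R (a quick sketch, assuming 'dat' contains the 'study' and 'outcome' columns used in the model call above):

```r
# Sketch: confirm that every study/outcome combination occurs exactly once,
# so the inner 'study/outcome' term acts as an estimate-level random effect.
id <- interaction(dat$study, dat$outcome, drop = TRUE)
nlevels(id) == nrow(dat)   # TRUE when the term is estimate-level
```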

For completeness' sake, the standard multilevel model without a crossed random effect for 'subscale' can be fitted with:

res <- rma.mv(yi, vi, random = ~ 1 | study/outcome, data=dat)
print(res, digits=3)

These are the results:

Multivariate Meta-Analysis Model (k = 68; method: REML)

Variance Components:

           estim   sqrt  nlvls  fixed         factor 
sigma^2.1  0.000  0.000     57     no          study 
sigma^2.2  0.033  0.181     68     no  study/outcome 

Model Results:

estimate     se    zval   pval   ci.lb   ci.ub 
  -0.052  0.026  -1.993  0.046  -0.103  -0.001  * 

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

These are identical to the HLM results in Table 1 (the 'study/outcome' variance is given as 0.032 in the paper, but that's a minor discrepancy that can happen due to slightly different optimization methods).

Best,
Wolfgang

-----Original Message-----
From: Viechtbauer, Wolfgang (SP) 
Sent: Friday, 18 January, 2019 18:34
To: 'Assink, Mark'; r-sig-meta-analysis at r-project.org
Subject: RE: Cross-Classified Random-Effects Model in rma.mv

Dear Mark,

Indeed,

rma.mv(yi, vi, random = list(~ 1 | study/effectsize, ~ 1 | instrument), data=data)

will be a model with crossed random effects (that is, 'instrument' is not nested, but a crossed random effect).

Whether 'instrument' should be considered nested within studies or treated as a crossed random effect is debatable. Actually, unless the same instrument was used multiple times within at least some of the studies (e.g., study 4 provides three effect sizes, *two* with instrument 1 and one with instrument 2), the model

rma.mv(yi, vi, random = ~ 1 | study/instrument/effectsize, data=data)

is overparameterized (since the instrument-level heterogeneity cannot be distinguished from the effectsize-level heterogeneity).
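One can check this condition directly (a sketch, assuming a data frame 'data' with 'study' and 'instrument' columns as in the model call): if no instrument is ever used more than once within the same study, the fully nested model above cannot separate the two inner variance components.

```r
# Sketch: does any instrument appear more than once within some study?
# If this is FALSE, study/instrument/effectsize is overparameterized.
tab <- table(data$study, data$instrument)
any(tab > 1)
```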

I haven't read the article by Fernández-Castilla et al. (2018) -- but thanks for bringing it to my attention! -- so I cannot tell you what they propose in their appendices. But hopefully the above is still useful to you (at least I can confirm that the syntax is correct).

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Assink, Mark
Sent: Friday, 18 January, 2019 16:49
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Cross-Classified Random-Effects Model in rma.mv

Dear Wolfgang and other members,

In a recent paper by Fernández-Castilla and colleagues (2018; https://doi.org/10.3758/s13428-018-1063-2), it is explained how a cross-classified random-effects model (CCREM) can be fitted in SAS. I was wondering whether and how CCREMs can be fitted in R using the rma.mv function of the metafor package.

Following the above-cited paper, suppose you have a meta-analytic structure in which effect sizes are nested within studies:

* Level 1 -> Variability in effect sizes due to sampling variance;
* Level 2 -> Variability in effect sizes extracted from the same studies (i.e., within-study variance);
* Level 3 -> Variability in effect sizes extracted from different studies (i.e., between-study variance).

Let's say that across primary studies multiple/different instruments were used to measure a specific outcome. For example, three effect sizes from study 1 were based on instruments 1, 2, and 3; two effect sizes from study 2 were based on instruments 1 and 4; three effect sizes from study 3 were based on instruments 2, 4, and 5, etc. So, the variable "instrument" can be regarded as a crossed factor.
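To make this concrete, the structure above can be written out as a toy data frame (hypothetical values, just mirroring the example):

```r
# Toy data: three effect sizes from study 1 (instruments 1-3), two from
# study 2 (instruments 1 and 4), three from study 3 (instruments 2, 4, 5).
dat <- data.frame(
  study      = c(1, 1, 1, 2, 2, 3, 3, 3),
  instrument = c(1, 2, 3, 1, 4, 2, 4, 5)
)
# 'instrument' levels recur across studies, so the factor is crossed
# with 'study' rather than nested within it:
any(colSums(table(dat$study, dat$instrument) > 0) > 1)   # TRUE
```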

To model the above structure using the rma.mv function, I would write:

rma.mv(yi, vi, random = list(~ 1 | study/effectsize, ~ 1 | instrument), data=data)

I assume that with this syntax, the clustering or dependency of effect sizes within studies is accounted for, while the variation in effect sizes based on the different instruments that are used (between-instrument variance) is also modeled.

However, I am not sure whether this would be correct, as Fernández-Castilla et al. (2018) refer to "random factors nested within studies" in their appendices with SAS code. I'd say that a variable like "instrument" from the example above would not be nested within studies, because the same instrument(s) are used across studies.

Are my reasoning and R syntax correct? I would highly appreciate any reflection, help, or suggestions.

Best,
Mark
