[R-meta] Subgroup correlated effects working model with additional random effects
James Pustejovsky
jepusto at gmail.com
Wed Aug 21 18:01:26 CEST 2024
Hi Andrea,
Comments inline below.
James
On Mon, Aug 19, 2024 at 6:31 AM Andrea Asgeirsdottir via
R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote:
> Hello all,
>
> I am conducting a meta-analysis on executive functions (EF) in
> adolescents. The meta-analysis includes studies that compare two groups of
> adolescents on at least one EF domain (inhibition, working memory,
> cognitive flexibility, decision-making, planning, verbal fluency). Not all
> studies included in the meta-analysis measure each domain. Most studies use
> several tasks to measure each domain. Often, more than one effect size is
> reported for each task. I am having some trouble specifying the working
> models, which I’ll then combine with RVE methods.
>
> The first aim is to answer the question of whether one group is generally
> more impaired on executive functioning compared to the other group
> (independent of domains). For this I have the following:
>
> Variance-covariance matrix:
> vEF_overall <- vcalc(
> vi = vi,
> cluster = StudyID,
> obs = ESID,
> rho = 0.6,
> data = adolEF
> )
>
>
This assumes a common sampling correlation of 0.6 for every pair of effect
sizes coming from the same study, without drawing any distinction between
effects from the same domain or same task versus effects from different
tasks. If you would like, you could use the type argument of vcalc()
together with a pair of values for rho to have a different correlation for
ES from the same domain versus those from different domains. See point #3
under Details in the vcalc documentation:
https://wviechtb.github.io/metafor/reference/vcalc.html
Ideally, you could motivate the choice of rho value(s) based on empirical
data, or at least a shared understanding of the correlations within and
across tasks.
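For instance, here is a minimal sketch of that idea, where the two rho
values (0.7 within the same domain, 0.4 across domains) are placeholders
that you would need to justify:

vEF_overall <- vcalc(
  vi = vi,
  cluster = StudyID,
  type = DomainID,     # effects from the same domain share the first rho value
  obs = ESID,
  rho = c(0.7, 0.4),   # c(same-domain, different-domain) sampling correlations
  data = adolEF
)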
> Overall-difference working model:
> overallEF <- rma.mv(
> yi,
> V = vEF_overall,
> random = ~ ESID | StudyID,
> struct = "HCS",
> data = adolEF,
> method = "REML",
> sparse = TRUE
> )
>
>
This specification will not work because there is no replication of ESID
across studies. I would suggest instead
random = ~ 1 | StudyID / ESID
which is a "plain vanilla" correlated-and-hierarchical effects model. Or if
you want to include an intermediate level for tasks:
random = ~ 1 | StudyID / Task / ESID
which would be a CHE+ model.
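For concreteness, a sketch using the objects from your code above (swap in
the commented line for the CHE+ version):

overallEF <- rma.mv(
  yi,
  V = vEF_overall,
  random = ~ 1 | StudyID / ESID,           # CHE: effect sizes nested within studies
  # random = ~ 1 | StudyID / Task / ESID,  # CHE+: adds an intermediate task level
  data = adolEF,
  method = "REML",
  sparse = TRUE
)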
> The second aim is to determine which subdomains show the most pronounced
> impairments. To compare EF domains, I have specified a variance-covariance
> matrix with DomainID as subgroup:
>
> Variance-covariance matrix for the subgroup correlated effects model:
> vEF_SCE <- vcalc(
> vi = vi,
> cluster = StudyID,
> subgroup = DomainID,
> obs = ESID,
> rho = 0.6,
> data = adolEF
> )
>
See comments above about within/between domain correlations. Otherwise the
syntax looks right.
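If you did want to distinguish same-task from different-task correlations
within a domain (your later question about type = Task), a sketch might look
like the following, again with placeholder rho values:

vEF_SCE <- vcalc(
  vi = vi,
  cluster = StudyID,
  subgroup = DomainID,   # effects from different domains treated as independent
  type = Task,           # distinguish same-task from different-task correlations
  obs = ESID,
  rho = c(0.7, 0.4),     # c(same task, different tasks) -- placeholder values
  data = adolEF
)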
>
> Differences-between-domains working model (SCE model):
> domainEF <- rma.mv(
> yi ~ 0 + DomainID,
> V = vEF_SCE,
> random = list(~ DomainID | StudyID, ~ 1 | Task, ~ 1 | ESID),
> struct = "DIAG",
> data = adolEF,
> method = "REML",
> sparse = TRUE
> )
>
>
This specification has study-level random effects for each domain (treating
the domains as independent both within and across studies), plus task-level
random effects and ES-level random effects. A couple of notes here:
* Specifying ~ 1 | Task will yield one random effect per unique task. If
Task has common levels across studies, then this will result in
cross-classified random effects. Is this what you intend? Or did you want
to treat tasks as nested within studies (as in the CHE+ specification,
suggested above)?
* Specifying ~ 1 | ESID will yield one random effect per unique level of
ESID. Make sure that ESID has a unique level for _every_ observation for
this syntax to work as intended. This model assumes that the within-study,
within-domain heterogeneity is the same for every domain of EF. As an
alternative (and ignoring task-level random effects for the moment), you
could allow the within-study heterogeneity to differ by domain using
random = list(~ DomainID | StudyID, ~ DomainID | ESID),
struct = c("DIAG","DIAG")
This model would then be equivalent to fitting a CHE model to the subset of
effects for each domain, as in
rma.mv(
yi,
V = vEF_overall,
random = ~ 1 | StudyID / ESID,
data = adolEF,
subset = DomainID == <specific domain level>,
method = "REML",
sparse = TRUE
)
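Put together, a sketch of that alternative specification (ignoring the task
level for the moment, and intended only as a sketch rather than a final
model) would be:

domainEF_diag <- rma.mv(
  yi ~ 0 + DomainID,
  V = vEF_SCE,
  random = list(~ DomainID | StudyID, ~ DomainID | ESID),
  struct = c("DIAG", "DIAG"),   # separate between- and within-study variances per domain
  data = adolEF,
  method = "REML",
  sparse = TRUE
)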
> Do these specifications seem reasonable? I am unsure about the following:
> 1) How to specify the random effects in the second working model. The
> tasks used to index each domain vary between studies, but each task is
> usually included in several studies. I followed Pustejovsky & Tipton (2021)
> for specifying an SCE model, but added random effects for task and effect
> sizes.
> 2) Does it make sense to specify two separate variance-covariance matrices
> for the two working models? I’ve specified struct as “HCS” in the first one
> since not all studies assess all of the EF domains (after reading this:
> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2023-July/004827.html)
> and as “DIAG” in the subgroup (domains) model after seeing it specified
> like that in the example code provided by Pustejovsky & Tipton (
> https://osf.io/z27wt). Is it advisable to make these more specific, by
> e.g. including type = Task, since effect size estimates from tasks that tap
> the same EF domains can be expected to have correlated sampling errors?
>
> Best wishes,
> Andrea
> ---
> Doctoral researcher
> Omega lab, Department of Neurology
> Max Planck Institute for Human Cognitive and Brain Sciences
> Stephanstraße 1a
> 04103 Leipzig, Germany
>