[R-meta] Subgroup correlated effects working model with additional random effects

Andrea Asgeirsdottir asgeirsd at cbs.mpg.de
Fri Aug 23 14:47:50 CEST 2024


Hi James,

Thank you for your comments; they have been really helpful. For the second model, I am leaning towards the one you suggested, with random effects 
random = list(~ DomainID | StudyID, ~ DomainID | ESID), 
since I am interested in differences between domains. 
I am still unsure whether including cross-classified random effects for tasks is important. Tasks are nested within domains, but the same tasks appear in many different studies. Below is a small data frame capturing the structure of my data: 
dataStructure <- structure(list(StudyID = c(1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 
2L, 2L, 2L, 3L, 3L, 3L, 3L), DomainID = c(3L, 3L, 3L, 1L, 1L, 
1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 3L), DomainName = c("WM", 
"WM", "WM", "Inhibition", "Inhibition", "Inhibition", "CF", "CF", 
"CF", "Inhibition", "Inhibition", "Inhibition", "Inhibition", 
"CF", "CF", "WM"), Task = c(7L, 8L, 8L, 1L, 1L, 2L, 4L, 4L, 5L, 
1L, 2L, 2L, 3L, 6L, 4L, 8L), TaskName = c("N-back", "Spatial WM", 
"Spatial WM", "Stop-signal", "Stop-signal", "Stroop", "WCST", 
"WCST", "Trail making test", "Stop-signal", "Stroop", "Stroop", 
"Go/No-Go", "Set-shift task", "WCST", "Spatial WM"), ESID = 1:16, 
    ESTYPE = c("Response time", "Accuracy", "Response time", 
    "SSRT", "Accuracy", "Response time", "Perseverative error", 
    "Random error", "Response time", "SSRT", "Accuracy", "Interference effect", 
    "Commission errors", "Perseverative error", "Perseverative error", 
    "Accuracy")), class = "data.frame", row.names = c(NA, -16L
))

where the domain Inhibition includes the Stop-signal (1), Stroop (2), and Go/No-Go (3) tasks; 
Cognitive flexibility includes the WCST (4), Trail making test (5), and Set-shift task (6); 
and Working memory includes the N-back (7) and Spatial WM (8). 

Is including cross-classified random effects necessary if the focus of my question is on domains?
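The cross-classification can be checked directly from the structure above. This base-R sketch tabulates tasks against studies and lists the tasks that appear in more than one study:

```r
# Rebuild the task-by-study skeleton from the structure above
dataStructure <- data.frame(
  StudyID  = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3),
  TaskName = c("N-back", "Spatial WM", "Spatial WM", "Stop-signal",
               "Stop-signal", "Stroop", "WCST", "WCST", "Trail making test",
               "Stop-signal", "Stroop", "Stroop", "Go/No-Go",
               "Set-shift task", "WCST", "Spatial WM")
)

# A task is cross-classified with StudyID if it appears in more than one study
task_by_study <- with(dataStructure, table(TaskName, StudyID))
crossed <- rowSums(task_by_study > 0) > 1
names(crossed)[crossed]
# "Spatial WM" "Stop-signal" "Stroop" "WCST"
```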

Best wishes,
Andrea



----- Original Message -----
> From: "James Pustejovsky" <jepusto at gmail.com>
> To: "R Special Interest Group for Meta-Analysis" <r-sig-meta-analysis at r-project.org>
> Cc: "Andrea Asgeirsdottir" <asgeirsd at cbs.mpg.de>
> Sent: Wednesday, 21 August, 2024 16:01:26
> Subject: Re: [R-meta] Subgroup correlated effects working model with additional random effects

> Hi Andrea,
> 
> Comments inline below.
> 
> James
> 
> On Mon, Aug 19, 2024 at 6:31 AM Andrea Asgeirsdottir via
> R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote:
> 
>> Hello all,
>>
>> I am conducting a meta-analysis on executive functions (EF) in
>> adolescents. The meta-analysis includes studies that compare two groups of
>> adolescents on at least one EF domain (inhibition, working memory,
>> cognitive flexibility, decision-making, planning, verbal fluency). Not all
>> studies included in the meta-analysis measure each domain. Most studies use
>> several tasks to measure each domain. Often, more than one effect size is
>> reported for each task. I am having some trouble specifying the working
>> models, which I’ll then combine with RVE methods.
>>
>> The first aim is to answer the question of whether one group is generally
>> more impaired on executive functioning compared to the other group
>> (independent of domains). For this I have the following:
>>
>> Variance-covariance matrix:
>> vEF_overall <- vcalc(
>>   vi = vi,
>>   cluster = StudyID,
>>   obs = ESID,
>>   rho = 0.6,
>>   data = adolEF
>> )
>>
>>
> This assumes a common sampling correlation of 0.6 for every pair of effect
> sizes coming from the same study, without drawing any distinction between
> effects from the same domain or same task versus effects from different
> tasks. If you would like, you could use the type argument of vcalc()
> together with a pair of values for rho to have a different correlation for
> ES from the same domain versus those from different domains. See point #3
> under Details in the vcalc documentation:
> https://wviechtb.github.io/metafor/reference/vcalc.html
> Ideally, you could motivate the choice of rho value(s) based on empirical
> data or at least shared understanding of correlations within and across
> tasks.
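
The suggestion above can be sketched as follows. The two rho values are placeholders for illustration (within-domain vs. between-domain sampling correlation), not recommendations:

```r
library(metafor)

# Hedged sketch: block-structured sampling correlations by domain.
# With type specified, rho takes a pair of values: the first for effect
# sizes of the same type (domain), the second for different types.
vEF_overall <- vcalc(
  vi = vi,
  cluster = StudyID,
  type = DomainID,      # effects from the same domain share rho[1]
  obs = ESID,
  rho = c(0.7, 0.5),    # assumed: 0.7 within domain, 0.5 across domains
  data = adolEF
)
```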
> 
> 
> Overall-difference working model:
>> overallEF <- rma.mv(
>>   yi,
>>   V = vEF_overall,
>>   random = ~ ESID | StudyID,
>>   struct = "HCS",
>>   data = adolEF,
>>   method = "REML",
>>   sparse = TRUE
>> )
>>
>>
> This specification will not work because there is no replication of ESID
> across studies. I would suggest instead
> random = ~ 1 | StudyID / ESID
> which is a "plain vanilla" correlated-and-hierarchical effects model. Or if
> you want to include an intermediate level for tasks:
> random = ~ 1 | StudyID / Task / ESID
> which would be a CHE+ model.
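
Spelled out, the CHE specification suggested above would look like the sketch below, combined with cluster-robust (RVE) inference at the study level, which Andrea mentions planning to use (the robust() call is one common option; adolEF and vEF_overall follow Andrea's earlier code):

```r
library(metafor)

# "Plain vanilla" correlated-and-hierarchical effects (CHE) working model
overallEF <- rma.mv(
  yi,
  V = vEF_overall,
  random = ~ 1 | StudyID / ESID,   # or ~ 1 | StudyID / Task / ESID for CHE+
  data = adolEF,
  method = "REML",
  sparse = TRUE
)

# Cluster-robust variance estimation, clustering at the study level
robust(overallEF, cluster = StudyID, clubSandwich = TRUE)
```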
> 
> The second aim is to determine which subdomains show the most pronounced
>> impairments. To compare EF domains, I have specified a variance-covariance
>> matrix with DomainID as subgroup:
>>
>> Variance-covariance matrix for the subgroup correlated effects model:
>> vEF_SCE <- vcalc(
>>   vi = vi,
>>   cluster = StudyID,
>>   subgroup = DomainID,
>>   obs = ESID,
>>   rho = 0.6,
>>   data = adolEF
>> )
>>
> 
> See comments above about within/between domain correlations. Otherwise the
> syntax looks right.
> 
> 
>>
>> Differences-between-domains working model (SCE model):
>> domainEF  <- rma.mv(
>>   yi ~ 0 + DomainID,
>>   V = vEF_SCE,
>>   random = list(~ DomainID | StudyID, ~ 1 | Task, ~ 1 | ESID),
>>   struct = "DIAG",
>>   data = adolEF,
>>   method = "REML",
>>   sparse = TRUE
>> )
>>
>>
> This specification has study-level random effects for each domain, treating
> the domains as independent both within and across studies, task-level
> random effects, and ES-level random effects. A couple of notes here:
> 
> * Specifying ~ 1 | Task will yield one random effect per unique task. If
> Task has common levels across studies, then this will result in
> cross-classified random effects. Is this what you intend? Or did you want
> to treat tasks as nested within studies (as in the CHE+ specification,
> suggested above)?
> 
> * Specifying ~ 1 | ESID will yield one random effect per unique level of
> ESID. Make sure that ESID has a unique level for _every_ observation for
> this syntax to work as intended. This model assumes that the within-study,
> within-domain heterogeneity is the same for every domain of EF. As an
> alternative (and ignoring task-level random effects for the moment), you
> could allow the within-study heterogeneity to differ by domain using
> random = list(~ DomainID | StudyID, ~ DomainID | ESID),
> struct = c("DIAG","DIAG")
> This model would then be equivalent to fitting a CHE model to the subset of
> effects for each domain, as in
> rma.mv(
>  yi,
>  V = vEF_overall,
> random = ~ 1 | StudyID / ESID,
>  data = adolEF,
>  subset = DomainID == <specific domain level>,
>  method = "REML",
>  sparse = TRUE
> )
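
Putting that alternative together into a single call gives the sketch below (variable names follow Andrea's earlier code; factor() is used here since DomainID is stored as an integer in the example data):

```r
library(metafor)

# SCE working model with domain-specific between-study heterogeneity
# (first DIAG) and domain-specific within-study heterogeneity (second DIAG)
domainEF <- rma.mv(
  yi ~ 0 + factor(DomainID),       # separate average effect per domain
  V = vEF_SCE,
  random = list(~ DomainID | StudyID, ~ DomainID | ESID),
  struct = c("DIAG", "DIAG"),
  data = adolEF,
  method = "REML",
  sparse = TRUE
)
```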
> 
> 
> 
>> Do these specifications seem reasonable? I am unsure about the following:
>> 1)  How to specify the random effects in the second working model. The
>> tasks used to index each domain vary between studies, but each task is
>> usually included in several studies. I followed Pustejovsky & Tipton (2021)
>> for specifying a SCE model, but added random effects for task and effect
>> sizes.
>> 2) Does it make sense to specify two separate variance-covariance matrices
>> for the two working models? I’ve specified struct as “HCS” in the first one
>> since not all studies assess all of the EF domains (after reading this:
>> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2023-July/004827.html)
>> and as “DIAG” in the subgroup (domains) model after seeing it specified
>> like that in the example code provided by Pustejovsky & Tipton (
>> https://osf.io/z27wt). Is it advisable to make these more specific, by
>> e.g. including type = Task, since effect size estimates from tasks that tap
>> the same EF domains can be expected to have correlated sampling errors?
>>
>> Best wishes,
>> Andrea
>> ---
>> Doctoral researcher
>> Omega lab, Department of Neurology
>> Max Planck Institute for Human Cognitive and Brain Sciences
>> Stephanstraße 1a
>> 04103 Leipzig, Germany
>>
>> _______________________________________________
>> R-sig-meta-analysis mailing list @ R-sig-meta-analysis at r-project.org
>> To manage your subscription to this mailing list, go to:
>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
