[R-meta] Three-level meta-analysis with different sources of dependency

James Pustejovsky jepusto at gmail.com
Tue Feb 7 21:12:22 CET 2023


Hi again Wilma,

Following up with a very small correction to the example code in my
previous reply. The argument tdist = TRUE is unnecessary in the rma.mv()
code. The comment (copy-pasted from the example script) is also a bit
misleading because tdist = TRUE is not the same thing as the Knapp-Hartung
adjustment.

The revised code for the multi-level meta-analysis model would be as
follows:

# Multi-level model: random intercepts for studies and for effect sizes
# within studies, with cluster-robust (CR2) inference clustered by study
overall <- rma.mv(yi, vi,
                  data = df,
                  level = 95,
                  method = "REML",
                  slab = author_year,
                  random = list(~ 1 | study_id, ~ 1 | esid)) |>
  robust(cluster = study_id, clubSandwich = TRUE)
summary(overall)

Or for the correlated-and-hierarchical effects model:

# Approximate sampling variance-covariance matrix, assuming a correlation
# of 0.6 among effect size estimates from the same study
V <- vcalc(vi, cluster = study_id, obs = esid, data = df, rho = 0.6)

# CHE model: same random-effects structure, but with V in place of the
# independent sampling variances
overall <- rma.mv(yi, V = V,
                  data = df,
                  level = 95,
                  method = "REML",
                  slab = author_year,
                  random = list(~ 1 | study_id, ~ 1 | esid)) |>
  robust(cluster = study_id, clubSandwich = TRUE)
summary(overall)
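
Since the value of rho is an assumption, a quick sensitivity check can be reassuring. Something along these lines (just a sketch, reusing the variable names from the example above) refits the CHE model over a grid of rho values so you can confirm that the pooled estimate and its confidence interval barely move:

rho_values <- c(0.2, 0.4, 0.6, 0.8)
sensitivity <- lapply(rho_values, function(r) {
  # Rebuild the working V matrix under the assumed correlation r
  V_r <- vcalc(vi, cluster = study_id, obs = esid, data = df, rho = r)
  fit <- rma.mv(yi, V = V_r,
                data = df,
                method = "REML",
                random = list(~ 1 | study_id, ~ 1 | esid)) |>
    robust(cluster = study_id, clubSandwich = TRUE)
  data.frame(rho = r, estimate = as.numeric(coef(fit)),
             ci.lb = fit$ci.lb, ci.ub = fit$ci.ub)
})
do.call(rbind, sensitivity)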

James


On Tue, Feb 7, 2023 at 11:54 AM James Pustejovsky <jepusto at gmail.com> wrote:

> Hi Wilma,
>
> Combining the multi-level meta-analytic approach with RVE is one fairly
> low-effort way to address the concern of dependent effect sizes. As far as
> implementation, it is simply a matter of running the model results through
> the robust() function in metafor. Here's an example, elaborating on the
> script you linked to:
>
> # Create multilevel meta-analytic object for overall pooled effect
> overall <- rma.mv(yi, vi,
>                   data = df,
>                   level = 95,
>                   method = "REML", # tau-squared estimator
>                   slab = author_year, # study label
>                   tdist = TRUE, # apply Knapp-Hartung adjustment for our confidence intervals
>                   random = list(~ 1 | study_id,
>                                 ~ 1 | esid)) # account for dependency in the data
> overall_robust <- robust(overall, cluster = study_id, clubSandwich = TRUE)
> summary(overall_robust)
>
> Here's an alternate syntax, using R's pipe operator:
>
> # Create multilevel meta-analytic object for overall pooled effect
> overall <- rma.mv(yi, vi,
>                   data = df,
>                   level = 95,
>                   method = "REML", # tau-squared estimator
>                   slab = author_year, # study label
>                   tdist = TRUE, # apply Knapp-Hartung adjustment for our confidence intervals
>                   random = list(~ 1 | study_id,
>                                 ~ 1 | esid)) |>
>   robust(cluster = study_id, clubSandwich = TRUE)
> summary(overall)
>
> With either syntax, you'll need to specify the cluster = argument to tell
> metafor the level at which to cluster the robust variance estimator.
> Setting clubSandwich = TRUE provides small-sample adjustments that have
> better performance characteristics when the number of clusters is limited.
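>
> If you want to look at the cluster-robust tests and confidence intervals
> directly, the clubSandwich package can also be applied to the fitted
> rma.mv object. Roughly along these lines (just a sketch, reusing the
> overall object fitted above):
>
> library(clubSandwich)
> # Cluster-robust (CR2) t-tests and Satterthwaite confidence intervals,
> # clustering at the study level
> coef_test(overall, vcov = "CR2", cluster = df$study_id)
> conf_int(overall, vcov = "CR2", cluster = df$study_id)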
>
> A further step would be to implement the correlated-and-hierarchical
> effects working model rather than the multi-level meta-analysis (which, as
> you noted, assumes independent effect size estimates within studies). The
> idea here is to create an approximate sampling variance-covariance matrix
> for the effect size estimates, to acknowledge that there is some dependence
> in them, even if we're unsure about the exact degree of dependence. You can
> implement this using metafor's vcalc() function. Here's a basic example,
> assuming a correlation of .6 between effect size estimates from the same
> study:
>
> V <- vcalc(vi, cluster=study_id, obs=esid, data=df, rho=0.6)
>
> Once you've got the V matrix, you feed it into the V argument of rma.mv()
> as follows:
> overall <- rma.mv(yi = yi, V = V,
>                   data = df,
>                   level = 95,
>                   method = "REML", # tau-squared estimator
>                   slab = author_year, # study label
>                   tdist = TRUE, # apply Knapp-Hartung adjustment for our confidence intervals
>                   random = list(~ 1 | study_id,
>                                 ~ 1 | esid)) |>
>   robust(cluster = study_id, clubSandwich = TRUE)
> summary(overall)
>
> You noted a potential concern that the reason for dependence differs from
> study to study, which suggests that assuming the same level of correlation
> (e.g., rho = .6) isn't very plausible. The vcalc() function has some
> features that would let you make more elaborate assumptions based on timing
> of measurements and such (see the documentation here:
> https://wviechtb.github.io/metafor/reference/vcalc.html). Depending on
> how big your concern is, perhaps it would be worth exploring these
> features. If it's a small feature of the data, however, I think it would be
> pretty reasonable and conventional to use a common correlation assumption,
> since robust variance estimation / inference methods will work even if some
> aspects of the working model aren't correctly specified.
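>
> For instance, if you have coded the time point at which each effect size
> was measured, something along these lines would impose an autoregressive
> correlation across time points within a study (just a sketch; time_point
> is a hypothetical column name, and the values of rho and phi are
> assumptions you would need to justify for your data):
>
> # rho: correlation among estimates from the same study at the same time;
> # phi: autocorrelation of estimates across time points within a study
> V <- vcalc(vi, cluster = study_id, obs = esid, time1 = time_point,
>            data = df, rho = 0.6, phi = 0.9)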
>
> James
>
> On Tue, Feb 7, 2023 at 2:18 AM Wilma Charlott Theilig via
> R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote:
>
>> Dear all,
>>
>> Thank you for adding me to the mailing list! Meta-analysis and R
>> beginner here.
>>
>>
>> I plan to conduct a meta-analysis following a systematic review on the
>> topic "Empathy and Theory of Mind - Do they correlate in children?". My
>> data set consists of correlational data. In total, I have identified 80
>> studies and 204 effect sizes that I could use for the analysis. Since
>> nested effect sizes are available and I do not have any information about
>> the correlations between these nested effect sizes, it is possible to work
>> with either RVE or multi-level analyses.
>>
>> For my research question, a three-level meta-analysis would make the most
>> sense (I want to do a moderator analysis with meanage and assessment type
>> and add "Study" as an additional level).
>>
>> The problem I have, however, is that my effect sizes are dependent for
>> various reasons. I have T1 and T2 data from longitudinal studies, the
>> female, male and overall sample of studies, as well as samples where the
>> correlation between empathy and ToM was measured using the same sample but
>> different instruments.
>>
>> On the metafor website, the example based on Konstantopoulos (2011) states
>> that "It is important to note that the models used above assume that the
>> sampling errors of the effect size estimates are independent. This is
>> typically an appropriate assumption as long as there is no overlap in the
>> data/individuals used to compute the various estimates. However, when
>> multiple estimates are obtained from the same group of individuals, then
>> this assumption is most certainly violated."
>>
>>
>> I was planning to use the R-script by Gucciardi (2021)
>>
>> https://osf.io/brhsw
>>
>> and was wondering if I could adapt it to account for the different
>> sources of dependency. I have read about combining RVE with multi-level
>> meta-analysis, or about CHE models, as ways to address my problem, but I
>> was wondering what the best (and easiest) way would be.
>>
>> What would be the consequences of just ignoring the different sources of
>> dependency?
>>
>> I am really looking forward to your answers.
>>
>>
>> Best regards
>>
>> Wilma Theilig
>>
>

