[R-meta] Questions re multilevel meta-analysis

Lauren Mary Heidenreich  lauren.heidenreich at adelaide.edu.au
Mon Oct 9 03:42:10 CEST 2023


Hi everyone,

I am using the rma.mv() function to perform a multilevel meta-analysis. It is an individual participant data (IPD) meta-analysis, so I have access to the raw data. I am exploring the relationship between cognitive reserve (CR) indicators (e.g., educational attainment, occupational complexity, premorbid IQ) and cognitive outcomes (e.g., working memory, long-term memory, visual processing speed) among individuals previously infected with COVID-19. Because the moderating effect of CR on cognitive outcomes can only be assessed by including an estimate of brain pathology status, we are specifically exploring how CR and COVID-19 severity (e.g., time in hospital, need for oxygen therapy) interact to predict cognitive outcomes. We would expect a negative relationship between disease severity and cognitive outcomes that is less pronounced in people with higher CR (see hypothetical interaction: https://universityofadelaide.box.com/s/c0ehcj70m3765w99xbi6om61udos5czv).

To assess this relationship, we have fitted regression models of the following form (variable names illustrative):
lm(cognitive_outcome ~ age + sex + cognitive_reserve * covid_severity, data = study_data)

In a given study, there may be a range of cognitive outcomes, CR indicators, and COVID severity indicators that were assessed. To capture all of this information, we have repeated the linear model for every possible combination of these (anywhere from a handful to 200 models per study). The main effects and the interaction effect were then extracted in the form of semipartial correlations <https://journals.sagepub.com/doi/10.3102/1076998610396901>, which are the effect sizes of interest for the meta-analyses. (A semipartial correlation is essentially the correlation between a predictor and the outcome after the remaining predictors have been partialled out of that predictor only.) Across 30 studies we have derived approximately 1000 effect sizes for each of the two main effects and for the interaction term.
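For concreteness, the extraction for a single model looks roughly like the sketch below (variable names are hypothetical; the conversion uses metafor's built-in "SPCOR" measure, which implements the Aloe & Becker formulas linked above):

library(metafor)

# one of the fitted models (hypothetical variable names)
fit <- lm(memory_score ~ age + sex + cr_years * days_hospitalised, data = study_data)

# convert the t-statistic of the interaction term into a semipartial
# correlation (yi) and its sampling variance (vi)
es <- escalc(measure = "SPCOR",
             ti  = coef(summary(fit))["cr_years:days_hospitalised", "t value"],
             ni  = nobs(fit),               # sample size
             mi  = length(coef(fit)) - 1,   # number of predictors (excl. intercept)
             r2i = summary(fit)$r.squared)  # R^2 of the full model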

We have then run three multilevel meta-analyses (for the main effect of CR, main effect of COVID severity, and the interaction effect) with the following code:

library(metafor)       # rma.mv()
library(clubSandwich)  # impute_covariance_matrix()

# block-diagonal "working" V matrix, assuming a correlation of r = 0.6
# among sampling errors of effect sizes within the same study; note that
# V must be computed from the same rows as the model data
V <- impute_covariance_matrix(vi = data_temp$vi,
                              cluster = data_temp$study_id,
                              r = 0.6)

model <- rma.mv(yi, V,
                random = ~ 1 | study_id/effectsize_id,
                test = "t",      # t-tests (tdist = TRUE is the older alias)
                method = "REML",
                sparse = TRUE,
                data = data_temp)
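(As an aside, since the r = 0.6 in V is an assumption rather than an estimate, my understanding is that such a working V is usually paired with cluster-robust inference; a sketch of what I have in mind:)

# cluster-robust (CR2 / clubSandwich) standard errors, clustered by study
robust(model, cluster = data_temp$study_id, clubSandwich = TRUE)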

We then performed a series of moderation analyses assessing the influence of (1) cognitive domain, (2) CR indicator, and (3) severity indicator on the results:

# V must be recomputed from the rows in data_temp_2
V2 <- impute_covariance_matrix(vi = data_temp_2$vi,
                               cluster = data_temp_2$study_id,
                               r = 0.6)

moderation <- rma.mv(yi, V2,
                     mods = ~ moderator,
                     random = list(~ moderator | study_id,
                                   ~ 1 | effectsize_id),
                     struct = "HCS",  # separate tau^2 per moderator level, common rho
                     test = "t",
                     method = "REML",
                     sparse = TRUE,
                     control = list(rel.tol = 1e-8),  # stricter optimizer tolerance
                     data = data_temp_2)
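(A robustness check I am considering for the moderator coefficients, again as a sketch, assuming data_temp_2 is the analysis data set:)

library(clubSandwich)
# CR2 cluster-robust t-tests of the moderator coefficients
coef_test(moderation, vcov = "CR2", cluster = data_temp_2$study_id)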

As I have been conducting these analyses, several questions have presented themselves.

(1) Plotting the moderating influence of CR indicator (CRQ scores, education level, education years, etc.) on the main effect of CR resulted in the following: https://universityofadelaide.box.com/s/7r3km1oxdq8l0k8v42o3y7jtblq2ctel


As you can see, the estimated central tendency for CRQ scores is greatly inflated relative to the individual effect sizes (and somewhat inflated for the other CR indicators). I am assuming the CRQ estimate is inflated because its 64 effect sizes were all derived from a single study? If this is correct, should that level be dropped from the moderation analysis? If not, are you able to explain why this has occurred?
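(One sensitivity check I am considering, as a sketch, where "study_17" is a hypothetical label for the study contributing the CRQ effect sizes, is to refit the moderation model without that study:)

df_sub  <- subset(data_temp_2, study_id != "study_17")
V_sub   <- impute_covariance_matrix(vi = df_sub$vi, cluster = df_sub$study_id, r = 0.6)
mod_sub <- rma.mv(yi, V_sub,
                  mods = ~ moderator,
                  random = list(~ moderator | study_id, ~ 1 | effectsize_id),
                  struct = "HCS", test = "t", method = "REML",
                  sparse = TRUE, data = df_sub)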

(2) Regarding the calculations used to compute V via the impute_covariance_matrix() function: is it possible to compute this matrix manually from the raw data? If so, could you point me to resources on how to do this?
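(My current understanding, as a sketch under the same constant-correlation assumption the function makes, is something like the following; metafor::vcalc() appears to offer an equivalent built-in route:)

# manual equivalent of impute_covariance_matrix(), assuming a constant
# correlation rho among sampling errors within a study; rows must be
# sorted by study_id for the block-diagonal structure to line up
make_V <- function(vi, cluster, rho) {
  blocks <- lapply(split(vi, cluster), function(v) {
    S <- rho * tcrossprod(sqrt(v))  # rho * sqrt(v_i) * sqrt(v_j) off-diagonal
    diag(S) <- v                    # sampling variances on the diagonal
    S
  })
  as.matrix(Matrix::bdiag(blocks))
}
V_manual <- make_V(data_temp$vi, data_temp$study_id, rho = 0.6)

# built-in alternative in metafor
V_alt <- vcalc(vi, cluster = study_id, obs = effectsize_id, rho = 0.6,
               data = data_temp)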

(3) Finally, given the unusual nature of our meta-analysis and the large number of effect sizes derived from often very similar regression models, are there any problems you can see with our use of these functions?

Any help with these questions would be greatly appreciated!

Kind regards,
Lauren Heidenreich
PhD Student, University of Adelaide, South Australia

