[R-meta] Multilevel model between-study variance

Frederik Zirn frederik.zirn at uni-konstanz.de
Mon Jul 8 12:07:39 CEST 2024


Dear R-sig-meta-analysis community, 

I am a PhD student conducting a meta-analysis of 12 studies with 41 effect sizes. There is dependency among my effect sizes, since several studies measure the same outcome at multiple time points or measure multiple outcomes in the same sample. 

My first approach was to aggregate effect sizes per study using the agg() function of the MAd package: 
all_study_designs_combined_agg <- agg(id = Study, es = yi, var = vi,
                                      method = "BHHR", cor = 0.59,
                                      data = all_study_designs_combined)

The value cor = 0.59 is based on the correlations reported within one study included in my meta-analysis; I am running sensitivity analyses with other values. 
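Such a sensitivity analysis can be scripted as a simple loop over candidate correlations. The sketch below assumes the MAd package; the data frame here is toy data standing in for all_study_designs_combined (columns Study, yi, vi), so the numbers are purely illustrative:

```r
library(MAd)

# Toy stand-in for all_study_designs_combined: 12 studies x 3 effects
set.seed(42)
toy <- data.frame(Study = rep(1:12, each = 3),
                  yi    = rnorm(36, mean = 0.3, sd = 0.4),
                  vi    = runif(36, min = 0.02, max = 0.10))

# Re-aggregate under several plausible within-study correlations
for (r in c(0.30, 0.59, 0.80)) {
  agg_r <- agg(id = Study, es = yi, var = vi,
               method = "BHHR", cor = r, data = toy)
  cat(sprintf("cor = %.2f: mean aggregated es = %.3f\n",
              r, mean(agg_r$es)))
}
```

Comparing the aggregated estimates (and the downstream pooled results) across these values shows how sensitive the conclusions are to the assumed correlation.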

1) However, aggregating within studies is no longer considered state-of-the-art practice, correct? Or is this still a valid approach to handle dependent effect sizes?

Consequently, I aimed to create a multilevel model. Here is the code I used: 

# Fitting a CHE model with robust variance estimation
library(metafor)
library(clubSandwich)

# constant sampling correlation assumption
rho <- 0.59

# constant sampling correlation working model
V <- with(all_study_designs_combined, 
          impute_covariance_matrix(vi = vi,
                                   cluster = Study,
                                   r = rho))

che.model <- rma.mv(yi = yi,
                    V = V,
                    random = ~ 1 | Study/ES_ID,
                    data = all_study_designs_combined)
che.model

# robust variance estimation
full.model.robust <- robust(che.model,
                            cluster = all_study_designs_combined$Study,
                            clubSandwich = TRUE)
summary(full.model.robust)
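An equivalent route (assuming the clubSandwich package) is to call coef_test() directly with the CR2 adjustment, which reports the Satterthwaite degrees of freedom explicitly; with only 12 clusters, degrees of freedom well below 12 signal that the robust inference leans on very few studies. Toy data again stand in for all_study_designs_combined:

```r
library(metafor)
library(clubSandwich)

# Toy stand-in for all_study_designs_combined
set.seed(7)
toy <- data.frame(Study = rep(1:12, each = 3))
toy$ES_ID <- seq_len(nrow(toy))
toy$yi <- rnorm(nrow(toy), 0.3, 0.4)
toy$vi <- runif(nrow(toy), 0.02, 0.10)

V <- with(toy, impute_covariance_matrix(vi = vi, cluster = Study, r = 0.59))
m <- rma.mv(yi, V, random = ~ 1 | Study/ES_ID, data = toy)

# CR2 cluster-robust test with Satterthwaite degrees of freedom
coef_test(m, vcov = "CR2", cluster = toy$Study)
```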

Doing so, I receive the following variance components: 
           estim    sqrt  nlvls  fixed       factor 
sigma^2.1  0.0000  0.0000     12     no        Study 
sigma^2.2  0.1286  0.3587     41     no  Study/ES_ID 

2) I have trouble interpreting these estimates. Do they indicate that all of the variance in my model is within-study variance, i.e., that there is no between-study variance? That does not seem plausible to me. Am I overlooking something here? Could it be due to the limited number of studies (12)? 
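One way to probe whether a between-study variance of exactly zero is a boundary estimate rather than a substantive finding is to profile that variance component and to compare against a model with it fixed to zero. This is a sketch using metafor's profile() and a likelihood-ratio comparison; it refits the model on toy data so it runs stand-alone, but with the real data one would profile the fitted che.model directly:

```r
library(metafor)
library(clubSandwich)

# Toy stand-in for all_study_designs_combined
set.seed(1)
toy <- data.frame(Study = rep(1:12, each = 3))
toy$ES_ID <- seq_len(nrow(toy))
toy$yi <- rnorm(nrow(toy), 0.3, 0.4)
toy$vi <- runif(nrow(toy), 0.02, 0.10)

V <- with(toy, impute_covariance_matrix(vi = vi, cluster = Study, r = 0.59))
che.model <- rma.mv(yi, V, random = ~ 1 | Study/ES_ID, data = toy)

# Profile likelihood of the Study-level component: a flat profile or a
# peak at the zero boundary suggests sigma^2.1 is weakly identified
profile(che.model, sigma2 = 1)

# Likelihood-ratio comparison against a model with the between-study
# variance fixed to zero
che.reduced <- rma.mv(yi, V, random = ~ 1 | Study/ES_ID,
                      sigma2 = c(0, NA), data = toy)
anova(che.model, che.reduced)
```

Note also that with a high assumed rho, much of the between-study heterogeneity can be absorbed into the working covariance matrix V, so a zero sigma^2.1 at the Study level is not unusual in CHE models fitted to few clusters.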

Thanks in advance,
Frederik Zirn
PhD student
Chair of Corporate Education
University of Konstanz


