[R-meta] Moderator/predictor in multi-level meta-analysis
Hanel, Paul H P
p.hanel at essex.ac.uk
Thu Apr 7 00:51:11 CEST 2022
When running a standard random-effects meta-analysis with one dichotomous categorical predictor/moderator, the moderator estimate is very similar to the difference between the estimates obtained from running a separate random-effects meta-analysis for each level of the moderator. This is not the case, however, for a multi-level meta-analysis.
I am running a meta-analysis with over 300 effect sizes, in which all studies have a control and a treatment group. The mean differences between the control and intervention groups are quantified with Hedges' g. One moderator/predictor of these differences is sample type: students vs non-students. Using a standard random-effects model we get:
dfm <- subset(df, sample_type == 0) # sample_type has two levels: students (coded 0) vs other (coded 1)
dft <- subset(df, sample_type == 1)
rma(yi, vi, data = dfm) # gives g = .3855; yi are Hedges' gs
rma(yi, vi, data = dft) # gives g = .59
rma(yi, vi, mods = ~ sample_type, data = df) # full dataset with sample_type as categorical moderator/predictor. Estimate = .21, very close to .59 - .3855 = .20; the intercept (.3861) is also very close to the sample_type = 0 estimate.
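The near-equivalence in the simple random-effects case can be checked with a minimal simulated sketch (all data below are simulated, not the dataset from the post; the match is only approximate, since the subgroup fits each estimate their own tau^2 while the meta-regression assumes a common one):

```r
# Simulated check: moderator slope vs difference of subgroup estimates
library(metafor)

set.seed(42)
k <- 200
sample_type <- rep(0:1, each = k / 2)
vi <- runif(k, 0.01, 0.05)                      # sampling variances
yi <- 0.40 + 0.20 * sample_type +               # true means: 0.40 vs 0.60
      rnorm(k, 0, 0.10) +                       # between-study heterogeneity
      rnorm(k, 0, sqrt(vi))                     # sampling error

res0 <- rma(yi, vi, subset = sample_type == 0)  # "students" only
res1 <- rma(yi, vi, subset = sample_type == 1)  # "others" only
resm <- rma(yi, vi, mods = ~ sample_type)       # moderator model

coef(resm)[2]                                   # moderator slope
coef(res1)[1] - coef(res0)[1]                   # difference of subgroup estimates
```

In this simple setting the two quantities agree to roughly the second decimal, which is the pattern described above.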
However, when switching to a multi-level model with effect sizes nested in studies nested in papers, things get more complicated. Using the same datasets as above, we get:
rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID), tdist = TRUE, data=dfm) # gives us g = .45, 95%-CI [.33, .59]
rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID), tdist = TRUE, data=dft) # gives us g = .61, 95%-CI [.48, .73]
rma.mv(yi, vi, mods = ~ sample_type, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID), tdist = TRUE, data = df) # full dataset with sample_type as categorical moderator/predictor. Estimate = .08, 95%-CI [-.07, .24], which is quite a bit off from .61 - .45 = .16. Interestingly, the intercept (.48) also differs from the sample_type = 0 estimate, even though I assumed it would be .45.
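For comparison, the group-specific estimates can also be obtained within a single multi-level model by removing the intercept, so both variance components and the nesting structure are shared across groups. A sketch with simulated clustered data (the original df is not available; column names mirror the post):

```r
# Sketch: per-group estimates from one multi-level model via "- 1" (no intercept)
library(metafor)

set.seed(1)
papers <- 40
df <- do.call(rbind, lapply(seq_len(papers), function(p) {
  do.call(rbind, lapply(seq_len(sample(1:2, 1)), function(s) {   # 1-2 studies/paper
    n_es <- sample(1:3, 1)                                       # 1-3 effects/study
    data.frame(PaperID     = p,
               StudyID     = paste0(p, "_s", s),
               effectID    = paste0(p, "_s", s, "_e", seq_len(n_es)),
               sample_type = rbinom(1, 1, 0.5))
  }))
}))
u_paper <- rnorm(papers, 0, 0.15)               # paper-level random effects
df$vi <- runif(nrow(df), 0.01, 0.05)
df$yi <- 0.45 + 0.16 * df$sample_type + u_paper[df$PaperID] +
         rnorm(nrow(df), 0, 0.10) +             # effect-level heterogeneity
         rnorm(nrow(df), 0, sqrt(df$vi))        # sampling error

# "- 1" removes the intercept, so each level of sample_type gets its own
# estimate while the random-effects structure is held in common
res <- rma.mv(yi, vi, mods = ~ factor(sample_type) - 1,
              random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
              tdist = TRUE, data = df)
coef(res)                                       # one estimate per group
```

These within-model group estimates will generally not coincide with the ones from separate subgroup fits, since the subgroup fits estimate all variance components separately for each group.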
In short, why is the moderator estimate from the multi-level model not similar to the difference between the two effect sizes obtained separately for each of the two subgroups, assuming there is no error in the code above? Judging from the confidence intervals, the difference between students and others should be significant, yet including sample_type as a moderator does not yield a significant estimate.
I got similar discrepancies when using a different moderator.