[R-meta] Moderator/predictor in multi-level meta-analysis

Lukasz Stasielowicz lukasz.stasielowicz at uni-osnabrueck.de
Thu Apr 7 12:33:27 CEST 2022


Dear Paul,

Such discrepancies are to be expected when making group comparisons (not 
only in the meta-analytic context). Subgroup analyses based on separate 
data sets will sometimes lead to different estimates than an analysis of 
the combined data set, because the latter contains more information. In 
addition, fitting each subgroup separately estimates separate variance 
components per subgroup, whereas a single moderator model (as specified 
below) assumes common variance components across subgroups. Therefore, 
group comparisons within one model are often recommended.
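
A sketch of both approaches, untested and using the variable names from 
your code below. The second model uses struct = "DIAG", an option 
metafor offers if you do not want to assume equal heterogeneity across 
subgroups:

library(metafor)

# Moderator test within a single model (common variance components
# across the two subgroups):
res <- rma.mv(yi, vi, mods = ~ sample_type,
              random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
              tdist = TRUE, data = df)

# Variant: allow a separate between-study variance for each level of
# sample_type (struct = "DIAG" estimates one variance per subgroup),
# which mimics the separate subgroup fits more closely:
res2 <- rma.mv(yi, vi, mods = ~ sample_type,
               random = list(~ 1 | effectID,
                             ~ factor(sample_type) | StudyID,
                             ~ 1 | PaperID),
               struct = "DIAG", tdist = TRUE, data = df)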

In the multi-level context there is a phenomenon called shrinkage, 
which pulls extreme estimates toward the overall mean. It is often 
regarded as a good thing, because extreme estimates might be 
unreliable. The extent of the shrinkage depends on the data and on the 
random effects that were included, which could explain the different 
estimates across your analyses.
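
If you want to see the shrinkage directly, you can inspect the 
predicted random effects (BLUPs). A minimal sketch, assuming the model 
res from above:

# BLUPs of the random effects, one data frame per random term; they
# are pulled ("shrunken") toward zero, more strongly for levels with
# few or imprecise effect sizes:
sav <- ranef(res)
lapply(sav, head)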


Best,
Lukasz
-- 
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)


On 07.04.2022 at 12:00, r-sig-meta-analysis-request using r-project.org wrote:
> 
> Date: Wed, 6 Apr 2022 22:51:11 +0000
> From: "Hanel, Paul H P" <p.hanel using essex.ac.uk>
> Subject: [R-meta] Moderator/predictor in multi-level meta-analysis
> 
> Hello,
> 
> When running a standard random-effects meta-analysis with one 
> dichotomous categorical predictor/moderator, the estimate of the 
> moderator is very similar to the difference between the estimates 
> obtained from running a separate random-effects meta-analysis for each 
> level of the predictor. However, this does not hold when running a 
> multi-level meta-analysis.
> 
> I am running a meta-analysis with over 300 effect sizes in which all 
> studies have a control and a treatment group. The mean differences 
> between the control and the intervention group are quantified with 
> Hedges' g. One moderator/predictor of these differences is sample 
> type: students vs. non-students. When using a standard random-effects 
> model we get:
> 
> dfm <- subset(df, sample_type == 0)  # students (0) vs. other (1)
> dft <- subset(df, sample_type == 1)
> rma(yi, vi, data = dfm)  # g = .3855 (yi are Hedges' g values)
> rma(yi, vi, data = dft)  # g = .59
> rma(yi, vi, mods = ~ sample_type, data = df)
> # Full dataset with sample_type as categorical moderator/predictor.
> # Estimate = .21, very close to .59 - .3855 = .20; the intercept
> # (.3861) is also very close to the estimate for sample_type = 0.
> 
> However, when switching to a multi-level model with effect sizes 
> nested in studies nested in papers, things get more complicated. Using 
> the same data sets as above we get:
> 
> rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
>        tdist = TRUE, data = dfm)  # g = .45, 95% CI [.33, .59]
> rma.mv(yi, vi, random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
>        tdist = TRUE, data = dft)  # g = .61, 95% CI [.48, .73]
> rma.mv(yi, vi, mods = ~ sample_type,
>        random = list(~ 1 | effectID, ~ 1 | StudyID, ~ 1 | PaperID),
>        tdist = TRUE, data = df)
> # Full dataset with sample_type as categorical moderator/predictor.
> # Estimate = .08, 95% CI [-.07, .24], which is quite a bit off from
> # .61 - .45 = .16. Interestingly, the intercept (.48) is also off from
> # the separate estimate for sample_type = 0, which I assumed would be .45.
> 
> In short, why is the estimate from the multi-level moderator test not 
> similar to the difference between the two effect sizes obtained 
> separately for each of the two subgroups, assuming there is no error 
> in the code above? Judging by the confidence intervals, the difference 
> between students and others should be significant, but including 
> sample_type as a moderator does not yield a significant result.
> 
> I got similar discrepancies when using a different moderator.
> 
> Thank you,
> Paul
> 


