[R-meta] Seeking advice on multimoderator meta-regression in multilevel meta-analysis

Lukasz Stasielowicz lukasz.stasielowicz at uni-osnabrueck.de
Fri May 23 16:58:12 CEST 2025


Dear Maximilian,

As mentioned by Michael Dewey, several of your concerns seem warranted.

Since you’ve asked for potential references, I will provide some below.

Simultaneously including multiple moderators without theoretical 
justification can bias the regression coefficients. In the context of 
multiple regression, various articles have shown that spurious 
relationships can emerge, the sign of a regression coefficient can 
change, and incorrect conclusions are sometimes drawn from a model 
consisting of several predictors (e.g., predictor A is important, 
predictor B is irrelevant). One therefore needs to be cautious when 
adjusting for variables in regression models (Cinelli et al., 2022; 
Rohrer, 2018).
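A small base-R simulation (all variable names and effect sizes invented for illustration) shows how a coefficient's sign can flip once a correlated predictor enters the model:

```r
# Hypothetical example: x2 is correlated with x1 and has a small
# negative direct effect on y; on its own, x2 looks beneficial.
set.seed(1)
n  <- 1e4
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.5)     # strongly correlated with x1
y  <- x1 - 0.3 * x2 + rnorm(n)

coef(lm(y ~ x2))["x2"]       # positive (about 0.5): misleading on its own
coef(lm(y ~ x1 + x2))["x2"]  # negative (about -0.3): the sign flips
```

Neither model is "wrong" as a description of the data; they answer different questions, which is exactly why comparing coefficients across or within such models requires a causal rationale.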

Some researchers use the term “Table 2 fallacy” for the tendency to 
interpret and compare all regression coefficients from a single model 
as if each were an effect estimate (Westreich & Greenland, 2013). 
Usually, only one coefficient can be interpreted as an effect estimate. 
Some statisticians therefore even argue that we should report results 
only for the coefficient of interest, to discourage readers from 
comparing coefficients.

If you or the reviewer really want to compare the moderators, then one 
would have to provide a rationale for the assumed relationships between 
them. Because of suppression effects and related phenomena, analyzing 
all predictors simultaneously is rarely a good idea, so the predictors 
must be selected carefully.

Let’s assume the following relationship: X --> M --> Y. If we include 
both X and M as predictors in the regression model, one could wrongly 
conclude that X is irrelevant, because its regression coefficient will 
be close to zero once we adjust for the mediator M.
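A two-line simulation makes this concrete (hypothetical data; the 0.8 and 0.5 path coefficients are arbitrary):

```r
# Hypothetical mediation: x affects y only through the mediator m.
set.seed(2)
n <- 1e4
x <- rnorm(n)
m <- 0.8 * x + rnorm(n)
y <- 0.5 * m + rnorm(n)

coef(lm(y ~ x))["x"]      # total effect of x: about 0.4
coef(lm(y ~ x + m))["x"]  # near zero once we adjust for m
```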

Let’s apply this to the meta-analytic context. Sample characteristics X 
(e.g., countries) could determine how the construct is measured across 
studies (M). For example, particular questionnaires cannot be used in 
certain countries because the scale has not been translated yet, 
lengthy instruments are more likely to be used with student samples, 
more expensive methods are more likely to be used in rich countries, 
and so on. Such relationships need to be considered when deciding which 
predictors are included in a regression model and which of the 
resulting regression coefficients can be compared.
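The same phenomenon can be sketched with metafor itself. This is a toy example with invented data and moderator names (“country” and “instrument” are assumptions, not taken from your dataset): the country largely determines which instrument is used, and it is the instrument that shifts the effects.

```r
library(metafor)
set.seed(3)

# Invented data: country (X) largely determines the instrument (M),
# and only the instrument shifts the true effects.
k <- 150
country    <- factor(sample(c("A", "B"), k, replace = TRUE))
p_short    <- ifelse(country == "A", 0.1, 0.9)
instrument <- factor(ifelse(runif(k) < p_short, "short", "long"))
yi <- 0.2 + 0.3 * (instrument == "short") + rnorm(k, sd = 0.1)
vi <- runif(k, 0.01, 0.05)
dat <- data.frame(yi, vi, country, instrument)

# Uni-moderator model: country looks like a strong moderator ...
coef(rma(yi, vi, mods = ~ country, data = dat))

# ... but after adding the instrument, the country coefficient
# shrinks toward zero: the instrument carries the effect.
coef(rma(yi, vi, mods = ~ country + instrument, data = dat))
```

If country and instrument were perfectly confounded, the second model could not even be fitted (rank-deficient model matrix), which is the moderator-sparsity problem from the original question in its most extreme form.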


Sources:

Cinelli, C., Forney, A., & Pearl, J. (2022). A crash course in good and 
bad controls. Sociological Methods & Research. 
https://doi.org/10.1177/00491241221099552

Rohrer, J. M. (2018). Thinking clearly about correlations and causation: 
Graphical causal models for observational data. Advances in Methods and 
Practices in Psychological Science, 1(1), 27–42. 
https://doi.org/10.1177/2515245917745629

Westreich, D., & Greenland, S. (2013). The Table 2 fallacy: Presenting 
and interpreting confounder and modifier coefficients. American Journal 
of Epidemiology, 177(4), 292–298. https://doi.org/10.1093/aje/kws412




Best,
Lukasz
-- 
Lukasz Stasielowicz
https://stasielowicz.com/

On 23.05.2025 12:00, r-sig-meta-analysis-request using r-project.org wrote:
> 
> Today's Topics:
> 
>     1. Re: Seeking advice on multimoderator meta-regression in
>        multilevel meta-analysis (Michael Dewey)
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Thu, 22 May 2025 13:09:40 +0100
> From: Michael Dewey <lists using dewey.myzen.co.uk>
> To: R Special Interest Group for Meta-Analysis
> 	<r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Seeking advice on multimoderator meta-regression
> 	in multilevel meta-analysis
> Message-ID: <61d6dd0f-d525-4475-95f8-b7cfed2921a2 using dewey.myzen.co.uk>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
> 
> Dear Maximilian
> 
> Comments in-line
> 
> On 22/05/2025 09:52, Maximilian Steininger via R-sig-meta-analysis wrote:
>> Dear all,
>>
>> We conducted a multilevel meta-analysis with random effects specified for individual effect sizes (k = 90) nested within studies (n = 60). We preregistered a series of unimoderator analyses of 4 categorical predictors. Additionally, we conducted exploratory unimoderator analyses with 4 more categorical predictors and 2 continuous predictors – resulting in a total of 10 separate models.
>>
>> In our manuscript, we reported these unimoderator analyses, identified two significant moderators, and subsequently conducted an exploratory moderator analysis using these two significant moderators as predictors.
>> A reviewer suggested we instead include all moderators in a single multimoderator meta-regression model – i.e., using all 10 predictors (8 categorical, 2 continuous).
>>
>> I am open to this suggestion, but have some concerns, and I would be grateful for your insights.
>>
>> Model overview:
>>
>> - 5 categorical predictors with 2 levels
>> - 2 categorical predictors with 3 levels
>> - 1 categorical predictor with 4 levels
>> - 2 continuous (centred) predictors
>>
>> Here is an example of the model specification in R:
>>
>> metaregression = rma.mv(yi ~ cat1 + cat2 + cat3 + cat4 +
>>                              cat5 + cat6 + cat7 + cat8 +
>>                              con1 + con2,
>>                         Vmetaregression,
>>                         random = ~ 1 | study_id/es_id,
>>                         data = all_fx)
>>
>> My concerns are the following:
>>
>> 1) The model requires an estimation of 15 regression parameters. With only 60 studies and 90 effects, this falls below the often mentioned minimum of 10 studies per predictor. I worry this may lead to overfitting and unstable estimates. Would this compromise the stability of the regression coefficients due to increased sampling error?
> 
> I suspect you will see large standard errors for the coefficients in
> your multimoderator analysis.
> 
>> 2) With 8 categorical moderators, interpretation becomes challenging. If I understand correctly, the model yields conditional effects, i.e., each moderator’s estimate is reported holding all other moderators at their reference level. Is this correct? If so, it seems the coefficients might be difficult to interpret, since they are related to a small hypothetical subset of studies.
>>
> 
> I think that may be a scientific question - are we interested in such
> effects? Interpretation is also difficult if any of the moderators is
> strongly associated with others.
> 
>> 3) Related to 2, we will only have very sparse data across these category combinations, with some of these combinations being non-existent or underrepresented. To what extent can the model handle such sparsity and still provide meaningful estimates?
>>
> 
> I think that is covered by point 2.
> 
>> 4) Do we face power issues given the “moderate” number of effects relative to the number of moderators?
>>
> 
> I am not sure power is quite the right word here but your estimates
> will lack precision.
> 
> 
>> 5) Could the limited sample size, coupled with the large amount of moderators, increase sensitivity to outlying studies or effect sizes, potentially distorting the results?
>>
> 
> Probably
> 
>> I’m seriously considering the reviewer’s suggestion but want to ensure that any expanded model is both statistically sound and interpretable.
>>
>> Thanks in advance for your time and input - I appreciate any guidance or pointers to references that can help me tackle this issue.
>>
> 
> I think it is worth arguing with the referee unless they have suggested
> a clear scientific question which corresponds to the model.
> 
> Michael
> 
> 
>> Best and thanks,
>> Max
>>
>> ——
>>
>> Mag. Maximilian Steininger
>>     PhD candidate
>>
>>     Social, Cognitive, and Affective Neuroscience Unit
>>     Faculty of Psychology
>>     University of Vienna
>>
>>     Liebiggasse 5
>>     1010 Vienna, Austria
>>
>>     e: maximilian.steininger using univie.ac.at
>>     w: http://scan.psy.univie.ac.at
>>
>> _______________________________________________
>> R-sig-meta-analysis mailing list @ R-sig-meta-analysis using r-project.org
>> To manage your subscription to this mailing list, go to:
>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> 

