[R-meta] Adjusted R^2 for rma.mv?
Viechtbauer, Wolfgang (NP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Apr 28 12:06:52 CEST 2025
Hi Frank,
Interestingly, the proportional reduction in residual variance in an OLS regression model is actually equal to the *adjusted* R^2:
dat <- mtcars
res1 <- lm(mpg ~ hp + wt, data=dat)   # model with two predictors
res0 <- lm(mpg ~ 1, data=dat)         # intercept-only model
summary(res1)$r.squared
summary(res1)$adj.r.squared
# proportional reduction in residual variance = adjusted R^2
(sigma(res0)^2 - sigma(res1)^2) / sigma(res0)^2
library(metafor)
# with vi=0, rma() reduces to OLS; R2 is reported in percent, hence / 100
rma(mpg ~ hp + wt, vi=0, data=dat)$R2 / 100
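To see why the proportional reduction in the residual variance equals the adjusted R^2, note that sigma()^2 is RSS divided by the residual degrees of freedom, so the df correction is already built in. A small base-R check along these lines (continuing with the two models from above):

```r
# Same two models as above.
dat  <- mtcars
res1 <- lm(mpg ~ hp + wt, data=dat)
res0 <- lm(mpg ~ 1, data=dat)

n <- nrow(dat)
p <- 2  # number of predictors in res1

# sigma()^2 is the df-corrected residual variance: RSS / (n - p - 1).
rss1 <- sum(resid(res1)^2)
stopifnot(all.equal(sigma(res1)^2, rss1 / (n - p - 1)))

# Proportional reduction in residual variance ...
prop.red <- (sigma(res0)^2 - sigma(res1)^2) / sigma(res0)^2

# ... equals the textbook adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
# since sigma(res0)^2 = TSS / (n - 1) and sigma(res1)^2 = RSS / (n - p - 1).
r2  <- summary(res1)$r.squared
adj <- 1 - (1 - r2) * (n - 1) / (n - p - 1)
stopifnot(all.equal(prop.red, adj),
          all.equal(prop.red, summary(res1)$adj.r.squared))
```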
But as you say, with a large sample size, this matters little.
Maybe you could look into using some kind of regularization (e.g., ridge, lasso). The pema package (https://cran.r-project.org/package=pema) might be worth checking out.
The rma.mv() function (if you install the devel version) allows you to pass two arguments via ... called lambda1 and lambda2. These are L1 and L2 penalty terms for the regression coefficients (so lambda1 > 0 corresponds to lasso, lambda2 > 0 to ridge, and lambda1 > 0 & lambda2 > 0 to elastic net regularization). The higher the penalty terms, the stronger the shrinkage. Shrinking the coefficients will tend to increase sum(model1$sigma2), which in turn lowers the pseudo R^2 -- in effect giving a penalized R^2.
Choosing appropriate values for lambda1/lambda2 could be done with cross-validation, given that you have thousands of studies.
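To make the cross-validation idea concrete, here is a minimal generic sketch in base R, using a closed-form ridge estimator on the mtcars data as a stand-in for the penalized fit (the ridge_fit and cv_error helpers are hypothetical names, not metafor API); in practice, the fit/predict steps would be replaced by rma.mv() calls with different lambda2 values:

```r
# Generic K-fold cross-validation sketch for choosing a ridge penalty.
# Illustration only: a plain base-R ridge regression stands in for the
# penalized rma.mv() fit described above.
set.seed(42)

X <- scale(as.matrix(mtcars[, c("hp", "wt", "disp", "drat")]))
y <- mtcars$mpg
n <- nrow(X)

# Closed-form ridge fit; intercept left unpenalized via centering.
ridge_fit <- function(X, y, lambda) {
  mu <- colMeans(X)
  b0 <- mean(y)
  Xc <- sweep(X, 2, mu)
  b  <- solve(crossprod(Xc) + lambda * diag(ncol(Xc)), crossprod(Xc, y - b0))
  function(Xnew) drop(b0 + sweep(Xnew, 2, mu) %*% b)  # returns predictions
}

# Mean out-of-fold MSE for a given penalty value.
cv_error <- function(lambda, K = 5) {
  folds <- sample(rep_len(1:K, n))
  mse <- numeric(K)
  for (k in 1:K) {
    test   <- folds == k
    pred   <- ridge_fit(X[!test, , drop = FALSE], y[!test], lambda)
    mse[k] <- mean((y[test] - pred(X[test, , drop = FALSE]))^2)
  }
  mean(mse)
}

lambdas <- c(0, 0.1, 1, 10, 100)
cv   <- sapply(lambdas, cv_error)
best <- lambdas[which.min(cv)]  # penalty with the lowest out-of-fold error
```

With thousands of studies, each fold is large enough that the out-of-fold error estimates should be quite stable.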
Best,
Wolfgang
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces@r-project.org> On Behalf
> Of Frank Bosco via R-sig-meta-analysis
> Sent: Wednesday, April 23, 2025 18:31
> To: r-sig-meta-analysis@r-project.org
> Cc: Frank Bosco <meta@frankbosco.com>
> Subject: [R-meta] Adjusted R^2 for rma.mv?
>
> Hi all,
>
> I am running a variety of multilevel meta-regressions using rma.mv. I am
> estimating pseudo R^2 using the following: (sum(model0$sigma2) -
> sum(model1$sigma2)) / sum(model0$sigma2) -- where model0 is an
> intercept-only model and model1 contains moderators.
>
> I would like to estimate an adjusted R^2 that will penalize the addition
> of moderators. However, the number of studies I am summarizing is large
> (in the thousands). Thus, using the standard formula for adjusted R^2,
> adding 20 or so predictors to the model reduces the R^2 by only decimal
> dust.
>
> Are there any suggestions on how to arrive at something like an adjusted
> R^2 in this context?
>
> Although other fit indices might be preferred, I would like to stay
> within the familiar R^2 metric for ease of interpretation. (Though, I am
> curious as to which other fit indices would be preferred.)
>
> Thanks,
>
> Frank
>
> *Frank Bosco, Ph.D.*
> Director, metaBUS.org
> Professor
> School of Business
> Department of Management & Entrepreneurship
>
> Virginia Commonwealth University
> Snead Hall
> 301 West Main Street, Room B4151
> Richmond, Virginia 23284
> Fax: 804 828-1602
> business.vcu.edu