[R-meta] pseudo r2 values for rma.mv models

Gabriele Midolo gabriele.midolo at gmail.com
Tue Sep 27 10:51:36 CEST 2022


Dear all,

My colleagues and I would like to report pseudo-R2 values in the results of
a meta-analysis. We are wondering 1) whether our approach to calculating R2
for meta-analytic models is correct overall, and 2) whether it makes sense to
report such R2 values in the results of a paper, or whether it is better to
report just the omnibus test (the QM statistic) to assess the importance of
the various moderators explored in the different meta-regression models.

We are currently estimating R2 with the following function, which computes
two pseudo-R2 values: one based on the sigma2 (variance) components and one
based on the log-likelihoods (McFadden's R2):

r2_rma.mv <- function(model, null.model) {

  # proportional reduction in the summed variance components
  r2.sigma <- (sum(null.model$sigma2) - sum(model$sigma2)) / sum(null.model$sigma2)

  # McFadden's pseudo-R2 based on the log-likelihoods
  r2.loglik <- 1 - (logLik(model) / logLik(null.model))

  return(cbind(r2.sigma, r2.loglik))

}

We are working with multilevel meta-analytic models fitted as follows:

res <- rma.mv(yi, vi, data = subset,
              random = list(~ 1 | Site_ID / ID, ~ Year | Site_ID),
              struct = "CAR", mods = ~ SOM, method = "ML")

Here SOM is a predictor in the meta-regression. The model `res` is then
compared to a null model with the same random-effects structure but without
moderators (mods = ~ 1) to calculate the pseudo-R2 values, as sketched below.
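
For concreteness, the comparison looks roughly like this (`subset` and `SOM`
stand in for our actual data and moderator, and `res0` is simply the name we
use here for the intercept-only fit):

library(metafor)

# null model: same data and random-effects structure, intercept only,
# also fitted with ML so that the log-likelihoods are comparable
res0 <- rma.mv(yi, vi, data = subset,
               random = list(~ 1 | Site_ID / ID, ~ Year | Site_ID),
               struct = "CAR", mods = ~ 1, method = "ML")

# both pseudo-R2 values for the moderator model relative to the null model
r2_rma.mv(res, res0)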

However, we are noticing a few odd results. First, `r2.sigma` tends to give
rather unrealistic values compared to the R2 based on the log-likelihoods.
Second, we sometimes find quite high R2 values in models where the moderator
has very little predictive power (i.e., a non-significant slope estimate and
a non-significant QM value in the omnibus test of moderators).
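
For reference, this is how we currently look at the omnibus test and at a
likelihood-ratio test between the two models (both fitted with ML); the
object names follow the sketch above:

# omnibus (Wald-type) test of the moderator, as reported by metafor
res$QM
res$QMp

# likelihood-ratio test of the moderator model against the null model
anova(res, res0)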

Thanks for any elucidation.

Best regards,
Gabriele.

