[R-meta] pseudo r2 values for rma.mv models

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Sep 27 11:06:44 CEST 2022

```
Dear Gabriele,

One can compute a pseudo-R^2 value in various ways. One is based on the proportional reduction in the (sum of the) variance components; McFadden's R^2 is another (and there are more). Either way, the value may be inaccurate. I just happened to have written up a little tutorial on how to use bootstrapping to construct a CI for R^2:

https://www.metafor-project.org/doku.php/tips:ci_for_r2

At the bottom is a link illustrating how to do this for a more complex model. You might want to try this out to see how wide the CI is.

That R^2 can be high even when the moderator is not significant can be partly explained by this as well. Also, if the total amount of heterogeneity, sum(null.model$sigma2), is low to begin with, then sum(model$sigma2) could easily become (close to) zero by chance alone even if a moderator is not significant, at which point R^2 would be 100% (or close to it)! Again, a CI would (hopefully) reveal that the R^2 value should not be trusted too much in such a case.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of Gabriele Midolo
>Sent: Tuesday, 27 September, 2022 10:52
>To: R meta
>Subject: [R-meta] pseudo r2 values for rma.mv models
>
>Dear all,
>
>My colleagues and I would like to report pseudo-R2 values in the results of
>a meta-analysis. We are wondering 1) if our approach to calculating R2 for
>meta-analytic models is correct overall, and 2) if it makes sense to
>report such R2 values in the results of a paper, or whether it is
>better to report just the omnibus test (the QM statistic) to gauge the
>importance of the various moderators explored in different meta-regression
>models.
>
>We are currently estimating R2 values using the following function, which
>computes two pseudo-R2 values, one based on sigma2 and one based on
>log-likelihood values:
>
>r2_rma.mv <- function(model, null.model) {
>  # proportional reduction in the summed variance components
>  r2.sigma <- (sum(null.model$sigma2) - sum(model$sigma2)) /
>    sum(null.model$sigma2)
>  # McFadden's pseudo-R2 based on the log-likelihoods
>  r2.loglik <- 1 - (logLik(model) / logLik(null.model))
>  return(cbind(r2.sigma, r2.loglik))
>}
>
>We are working with multilevel meta-analytical models fitted as follows:
>
>res <- rma.mv(yi, vi, data=subset, random=list(~ 1 | Site_ID/ID, ~ Year
>| Site_ID), struct="CAR", mods=~ SOM, method="ML")
>
>where SOM is a predictor in the meta-regression. The model `res` is then
>compared to a null model with the same structure but without predictors
>(mods=~1) to calculate the pseudo-R2 values.
>
>However, we are noticing a few weird results. First, `r2.sigma` tends to
>produce quite unrealistic values compared to the R2 based on log-likelihood.
>Second, we sometimes find quite high R2 values in models where the
>moderator has very low predictive power (i.e., a non-significant slope
>estimate and non-significant QM values in the omnibus test for
>moderators).
>
>Thanks for any elucidation.
>
>Best regards,
>Gabriele.
```
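
To make the two pseudo-R^2 definitions discussed above concrete, here is a minimal base-R sketch of both formulas. Plain numbers stand in for fitted rma.mv objects, so all values (the sigma2 vectors and log-likelihoods) are made up for illustration, and the helper names are hypothetical:

```r
# Proportional reduction in the summed variance components:
# R^2_sigma = (sum(sigma2_null) - sum(sigma2_full)) / sum(sigma2_null)
r2_sigma <- function(sigma2_null, sigma2_full) {
  (sum(sigma2_null) - sum(sigma2_full)) / sum(sigma2_null)
}

# McFadden's (log-likelihood based) pseudo-R^2:
# R^2_loglik = 1 - logLik(full) / logLik(null)
r2_loglik <- function(ll_null, ll_full) {
  1 - ll_full / ll_null
}

# Hypothetical values: two variance components per model, as in a
# multilevel model with two random terms
sigma2_null <- c(0.08, 0.02)   # null model (mods = ~ 1)
sigma2_full <- c(0.05, 0.01)   # moderator model (mods = ~ SOM)

r2_sigma(sigma2_null, sigma2_full)   # ~ 0.4
r2_loglik(-120.0, -114.0)            # ~ 0.05
```

Note how the two definitions can disagree substantially on the same pair of models, which is consistent with the discrepancy described in the original question.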
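
The tutorial linked above obtains a CI for R^2 by refitting the rma.mv model on each bootstrap sample. The generic percentile-bootstrap mechanics it relies on can be sketched in base R alone; here a simple placeholder statistic (the sample mean) stands in for the R^2 computation so the sketch runs without metafor:

```r
set.seed(42)
x <- rnorm(50, mean = 1)   # toy data standing in for the meta-analytic dataset

# resample the data with replacement and recompute the statistic many times
boot_stat <- replicate(2000, mean(sample(x, replace = TRUE)))

# percentile CI: the empirical 2.5% and 97.5% quantiles of the
# bootstrap distribution
ci <- quantile(boot_stat, c(0.025, 0.975))
```

For the real application, each replicate would refit both the null and the moderator model on the resampled data and recompute R^2, and a wide resulting interval would signal that the point estimate of R^2 should not be over-interpreted.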