[R-meta] Questions regarding REML and FE models and R^2 calculation in metafor

James Pustejovsky jepusto at gmail.com
Mon Jul 24 21:48:06 CEST 2023


Hi Nevo,

Considering the structure of your data (50 references with an average of 10
experiments per reference), I would suggest moving to a more flexible
model that includes random effects not only at the level of reference, but
also at the level of experiment, as in:
random = ~ 1 | Reference / Experiment
Using this random effects structure will then let you describe how the
moderator explains variation both between references and within references
(i.e., by comparing the variance components from a model with moderators to
the variance components from a model with an intercept alone).
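
For concreteness, a minimal sketch of that comparison (the data frame 'dat'
and the columns yi, vi, Reference, Experiment, and X below are placeholders
for your own variable names):

library(metafor)

# intercept-only model, with random effects for references and for
# experiments nested within references
res0 <- rma.mv(yi, vi, random = ~ 1 | Reference/Experiment, data = dat)

# same random-effects structure, now including the moderator
res1 <- rma.mv(yi, vi, mods = ~ X,
               random = ~ 1 | Reference/Experiment, data = dat)

# proportional reduction in each variance component
# (between-reference and within-reference, respectively)
(res0$sigma2 - res1$sigma2) / res0$sigma2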

It could also be useful to center the moderators by reference (i.e.,
calculate the reference-specific mean of the moderator and then subtract
this from the original values of the moderator). Centering is akin to
decomposing the predictor into within-reference and between-reference
variation. The within-reference variation would come only from those 7
references where the value of the moderator changes across experiments. The
between-reference variation would come from all 50 references if different
articles use different levels of the moderator. The model for a moderator X
would then be:
mods = ~ X_mean + X_centered
I would anticipate that the coefficients on these predictors would be less
sensitive to the random-effects specification than the coefficient on the
uncentered predictor X.
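
In code, the decomposition might look like this (again only a sketch, using
the same hypothetical data frame 'dat' and moderator X as above):

# reference-specific mean of X and the within-reference deviation from it
dat$X_mean     <- ave(dat$X, dat$Reference)
dat$X_centered <- dat$X - dat$X_mean

res2 <- rma.mv(yi, vi, mods = ~ X_mean + X_centered,
               random = ~ 1 | Reference/Experiment, data = dat)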

James


On Mon, Jul 24, 2023 at 6:24 AM Nevo Sagi via R-sig-meta-analysis <
r-sig-meta-analysis using r-project.org> wrote:

> Dear list members, I have a follow-up question.
>
> In my dataset I have about 500 experiments (i.e., observations) across 50
> articles (i.e., references), but the moderators in question change across
> observations only within 7 of the references. Consequently, my rma.mv model
> that uses ~1|Reference as a random effect is over-sensitive to the data
> from these 7 studies compared to the others.
> In such a case, if I use an rma.mv (or rma.uni) model without a random
> effect, would it be more reliable?
> And if I do use such a model, how do I compute the R^2 for each moderator
> (as sigma^2 is inapplicable)?
>
> Thanks again,
> Nevo Sagi
>
> On Mon, Jun 5, 2023 at 10:52 AM Nevo Sagi <nevosagi8 using gmail.com> wrote:
>
> > Dear Wolfgang,
> >
> > Thank you for your feedback.
> >
> > It turns out that I misplaced the equation terms when calculating the
> > pseudo-R^2.
> >
> > All the best,
> > Nevo
> >
> > On Thu, Jun 1, 2023 at 3:30 PM Viechtbauer, Wolfgang (NP) <
> > wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >
> >> Dear Nevo,
> >>
> >> Please see my responses below.
> >>
> >> Best,
> >> Wolfgang
> >>
> >> >-----Original Message-----
> >> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
> >> >On Behalf Of Nevo Sagi via R-sig-meta-analysis
> >> >Sent: Thursday, 04 May, 2023 11:09
> >> >To: r-sig-meta-analysis using r-project.org
> >> >Cc: Nevo Sagi
> >> >Subject: [R-meta] Questions regarding REML and FE models and R^2 calculation
> >> >in metafor
> >> >
> >> >Dear list members,
> >> >
> >> >I conducted a meta-analysis on the role of climate in mediating a specific
> >> >ecological process, using the *metafor* package in R.
> >> >This is actually a meta-regression, using the rma.mv function, with
> >> >*temperature* and *precipitation* as moderators, along with some other
> >> >moderators related to experimental design. I also use reference as a random
> >> >effect ('random = ~1|*Reference*'), as some references include more than
> >> >one experiment.
> >> >
> >> >*1. FE vs REML model:*
> >> >After reading Wolfgang Viechtbauer's blog post
> >> ><https://wviechtb.github.io/metafor/reference/misc-models.html> on the
> >> >differences between fixed-effects and random-effects models in the
> >> >*metafor* package, I decided to use the FE method, because the studies I
> >> >gathered are not a random sample of the population of hypothetical studies.
> >> >Instead, the sample is biased by underrepresentation of some climates and
> >> >overrepresentation of others.
> >> >I wonder whether my interpretation of the difference between FE and REML
> >> >models is correct, and would like to get some feedback on it.
> >>
> >> I don't think this is really a good reason for using an FE model, because
> >> the underrepresentation of some climates and overrepresentation of others
> >> will affect your results either way. The bigger question is if climate is
> >> an important moderator, which you can examine via meta-regression.
> >>
> >> >*2. R^2 calculation:*
> >> >Reviewers of my manuscript required that I provide R-squared values for
> >> >each of the climate moderators.
> >> >Using the *metafor* package, only rma.uni models (where random variables
> >> >cannot be specified) provide R^2 estimation.
> >> >In a previous conversation in this mailing list, Wolfgang indicated that
> >> >pseudo-R^2 can be calculated based on the variance (sigma2) reported by
> >> >models including and excluding the moderator in question:
> >> >*(res0$sigma2 - res1$sigma2) / res0$sigma2*
> >> >*where 'res0' is the model without coefficients and 'res1' the model with.*
> >> >
> >> >I have two problems with this solution:
> >> >1. FE models do not provide variance components (sigma2). Therefore, the
> >> >pseudo R-squared can be calculated only for REML models. I guess this can
> >> >be explained by the nature of the models, which I don't fully understand.
> >>
> >> Yes, this approach to calculating such pseudo-R^2 values only works in RE
> >> models.
> >>
> >> >2. When using REML models and performing the above calculation, I get weird
> >> >results. For example, one of the pseudo R^2 values was above 1. This cannot
> >> >mean that the moderator explained more than 100% of the variance in the
> >> >effect size. How comparable is this pseudo R^2 to the standard R^2 of
> >> >simpler models?
> >>
> >> This is mathematically impossible. (res0$sigma2 - res1$sigma2) /
> >> res0$sigma2 is the same as 1 - res1$sigma2 / res0$sigma2 and the second
> >> term cannot be negative, so the resulting value cannot be larger than 1.
> >>
> >> >To conclude, I will be glad to get feedback on both problems:
> >> >1. Should I use a random-effect or fixed-effect model?
> >> >2. How do I get a reliable R^2 or an alternative measure of goodness of fit
> >> >for single-moderator models that include a random structure and a sampling
> >> >variance?
> >> >
> >> >Thank you very much,
> >> >
> >> >Nevo Sagi
> >> >
> >> >--
> >> >Dr. Nevo Sagi
> >> >
> >> >Prof. Dror Hawlena's Risk-Management Ecology Lab
> >> >Department of Ecology, Evolution & Behavior
> >> >The Alexander Silberman Institute of Life Sciences
> >> >The Hebrew University of Jerusalem
> >> >Edmond J. Safra Campus at Givat Ram, Jerusalem 9190401, Israel.
> >>


