[R-meta] Questions regarding REML and FE models and R^2 calculation in metafor
Nevo Sagi
nevosagi8 using gmail.com
Tue Jul 25 22:38:09 CEST 2023
One important point that I failed to mention before is that the
(continuous) moderators in question vary substantially between studies, much
more than within studies.
When the by-study random effect is included, I suspect that much of the
variation in effect size that could be explained by these moderators is
absorbed by the random effect instead.
Does that make sense?
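To make the concern concrete, here is a minimal metafor sketch of what I have
been comparing (the data frame dat and the columns yi, vi, reference,
experiment, and mod are placeholders for my actual data):

library(metafor)

## three-level model without the moderator
res0 <- rma.mv(yi, vi, random = ~ 1 | reference/experiment, data = dat)

## same model with the continuous moderator added
res1 <- rma.mv(yi, vi, mods = ~ mod, random = ~ 1 | reference/experiment, data = dat)

## one common pseudo-R^2: proportional reduction in the summed variance components
max(0, 1 - sum(res1$sigma2) / sum(res0$sigma2))

My worry is that the reference-level variance component already soaks up the
between-study differences that these moderators track.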
On Tue, Jul 25, 2023, 21:37 James Pustejovsky <jepusto using gmail.com> wrote:
> Hi Nevo,
>
> Responses inline below.
>
> Kind Regards,
> James
>
> On Tue, Jul 25, 2023 at 1:37 AM Nevo Sagi <nevosagi8 using gmail.com> wrote:
>
>> I don't understand the rationale for using random effects at the
>> experiment level. Experiments in my meta-analysis are analogous to
>> observations in a conventional statistical analysis.
>>
>
> I think this analogy doesn't follow. Conventional statistical analysis
> does have observation-level error terms (i.e., level-1 errors); they are
> simply included by default as part of the model. In meta-analytic models,
> these error terms are not included unless explicitly specified.
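> To illustrate (just a sketch; the data object dat and its column names are
> hypothetical): in metafor's rma.mv(), the known sampling variances are
> supplied through the second argument, but no heterogeneity terms are
> estimated unless you declare them via the random argument.
>
> library(metafor)
>
> ## no random effects specified: only the known sampling errors are modeled
> rma.mv(yi, vi, data = dat)
>
> ## heterogeneity must be added explicitly, e.g., at the reference and
> ## experiment level
> rma.mv(yi, vi, random = ~ 1 | reference/experiment, data = dat)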
>
>
>> What is the meaning of using random effects at the observation level?
>>
>
> Observation-level random effects here are used to capture heterogeneity of
> effects across the experiments nested within a study. Considering that
> you're interested in looking at moderators that vary across the experiments
> reported in the same reference, it seems useful to attend to heterogeneity
> at this level as well.
>
>
>> In my understanding, by using random effects at the Reference level, I
>> already tell the model to look at within-reference variation.
>>
>
> This is not correct. Including reference-level random effects captures
> _between-reference_ variation (or heterogeneity) of effects.
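> Continuing the sketch above, the two variance components of that three-level
> fit separate the two sources of heterogeneity:
>
> ml <- rma.mv(yi, vi, random = ~ 1 | reference/experiment, data = dat)
> ml$sigma2[1]  ## between-reference heterogeneity
> ml$sigma2[2]  ## within-reference (experiment-level) heterogeneity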
>
>
>> In fact, the reason I was considering omitting the random effect is that
>> the model was overly sensitive to variation in effect size across moderator
>> levels within specific references. I am more interested in the total
>> variation across the whole moderator spectrum, and therefore want to
>> focus on the between-reference variation.
>> Does that make sense?
>>
>
> I stand by my original recommendation to consider including
> experiment-level heterogeneity here. Omitting the experiment-level
> heterogeneity more-or-less corresponds to averaging the effect size
> estimates together so that you have one effect per reference, which will
> tend to conceal within-reference heterogeneity. In fact, if you are using a
> model that does not include moderators / predictors that vary at the
> experiment level (within reference), then the correspondence is exact.
> Further details here: https://osf.io/preprints/metaarxiv/pw54r/
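> As a rough illustration of that correspondence (a sketch only, assuming dat
> was created with escalc() and the sampling errors are independent within
> each reference, so rho = 0):
>
> library(metafor)
>
> ## average the effect size estimates within each reference
> agg <- aggregate(dat, cluster = dat$reference, rho = 0)
>
> ## a random-effects model on the aggregated (one-per-reference) estimates ...
> rma.mv(yi, vi, random = ~ 1 | reference, data = agg)
>
> ## ... corresponds to the full-data model without experiment-level heterogeneity
> rma.mv(yi, vi, random = ~ 1 | reference, data = dat)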
>