[R-meta] Guidance regarding balance in fixed- and random-effects

Luke Martinez martinezlukerm at gmail.com
Thu Oct 14 17:21:31 CEST 2021


I guess I mixed up "overfit" (not making a difference) with
"overparameterized" (beyond the data's capacity). What I meant to ask
is: don't "lab" and "outcome" seem to overfit model (2)? And if so,
shouldn't that be a sign that I should not add "lab" and "outcome" in
model (1)?

My ultimate goal is to better understand whether it is acceptable to
fit different models (with different fixed- and random-effects) to
answer an initial RQ (i.e., the overall effect of X) versus subsequent
RQs (i.e., the effect of mod1 and mod2 on X), or whether it is wiser
for those models to be in harmony.

I ask this because I think some meta-analysts advocate "focused"
models, where each model is specified (with its own fixed- and
random-effects) to answer one specific RQ.

Others advocate "succession in modeling": they start with an empty
(intercept-only) model and then add moderators to measure the
moderators' effects (i.e., what I inquired about in this post).
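For concreteness, the two strategies might be sketched in metafor roughly as follows. The variable names (yi, vi, mod1, mod2) and the nesting levels are taken from the models quoted below; `dat` is a hypothetical data frame, and the `sigma2` positions shown are assumptions about which components come out zero.

```r
library(metafor)

## "Succession in modeling": keep the a priori random-effects structure
## for both the intercept-only model and the moderator model.
res0 <- rma.mv(yi, vi,
               random = ~ 1 | lab / study / outcome / time / rowID,
               data = dat)
res1 <- rma.mv(yi ~ mod1 * mod2, vi,
               random = ~ 1 | lab / study / outcome / time / rowID,
               data = dat)

## "Focused" model: a specification tailored to the moderator RQ,
## dropping the levels whose variance components were estimated as zero.
res2 <- rma.mv(yi ~ mod1 * mod2, vi,
               random = ~ 1 | study / time / rowID,
               data = dat)

## Wolfgang's point can be checked directly: fixing the zero components
## (here, hypothetically, the 1st and 3rd, i.e., lab and outcome) via
## the sigma2 argument gives the same fit as estimating them, since
## their estimates are 0 anyway.
res1b <- rma.mv(yi ~ mod1 * mod2, vi,
                random = ~ 1 | lab / study / outcome / time / rowID,
                sigma2 = c(0, NA, 0, NA, NA),
                data = dat)
## If those estimates are exactly 0, logLik(res1) and logLik(res1b)
## should coincide.
```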

Thank you,
Luke

On Thu, Oct 14, 2021 at 9:32 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Overparameterized means that the parameters are not uniquely estimable. That is not what is happening here (presumably -- I don't know the details of your data). The peak of the likelihood surface just happens to be at 0 for those variance components. If you drop the corresponding random effects from your model, then you will get exactly the same results, so why bother?
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: Luke Martinez [mailto:martinezlukerm using gmail.com]
> >Sent: Thursday, 14 October, 2021 16:09
> >To: Viechtbauer, Wolfgang (SP)
> >Cc: R meta
> >Subject: Re: [R-meta] Guidance regarding balance in fixed- and random-effects
> >
> >Thank you Wolfgang.
> >
> >My concern was that, wouldn't subsequent models like (2) shown
> >below, where the moderators return ZERO variance for some of the
> >initially non-ZERO levels in (1), be overparameterized?
> >
> >That is, IF:
> >
> >(1) rma.mv(yi, vi, random = ~ 1 | lab / study / outcome / time / rowID)
> >    ==> All levels give non-ZERO variance
> >
> >(2) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | lab / study / outcome / time / rowID)
> >    ==> Now, "lab" & "outcome" give ZERO variance
> >
> >THEN: is (2) overparameterized?
> >
> >IF yes, THEN reparameterize (1) to make its random part match (2):
> >
> >(11) rma.mv(yi, vi, random = ~ 1 | study / time / rowID)
> >     ==> All levels give non-ZERO variance
> >
> >(22) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID)
> >     ==> All levels give non-ZERO variance
> >
> >On Thu, Oct 14, 2021 at 7:21 AM Viechtbauer, Wolfgang (SP)
> ><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >>
> >> If the estimate of sigma^2 for lab is zero, then
> >>
> >> rma.mv(yi, vi, random = ~ 1 | lab / study / outcome / time / rowID)
> >>
> >> and
> >>
> >> rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID)
> >>
> >> are identical. So I would not bother to manually drop the lab random effect, since in essence this happens automatically.
> >>
> >> In general, I would use (1) for all analyses since this is the a priori chosen model that is meant to reflect the dependencies and sources of heterogeneity you think may be relevant. If some components end up being 0 in some models, then so be it, but I would stick to one model for all analyses.
> >>
> >> Best,
> >> Wolfgang
> >>
> >> >-----Original Message-----
> >> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Luke Martinez
> >> >Sent: Wednesday, 13 October, 2021 23:26
> >> >To: R meta
> >> >Subject: [R-meta] Guidance regarding balance in fixed- and random-effects
> >> >
> >> >Dear Experts,
> >> >
> >> >Forgive my modeling question, but in answering RQs like "what is
> >> >the overall effect of X?", I often fit an intercept-only model
> >> >with several nested levels, like:
> >> >
> >> >(1) rma.mv(yi, vi, random = ~ 1 | lab / study / outcome / time / rowID)
> >> >
> >> >In the above model, all levels show non-zero heterogeneity.
> >> >
> >> >But then, in answering other RQs, when I add a couple of
> >> >moderators, some of the levels (e.g., "outcome" AND "lab") return
> >> >ZERO heterogeneity, leading me to fit a simpler model, like:
> >> >
> >> >(2) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID)
> >> >
> >> >Question: When this happens, does it mean that I should go back
> >> >and refit model (1) without "outcome" AND "lab" to unify the
> >> >random-effects specifications of model (1) and model (2)?
> >> >
> >> >OR is model (1) appropriate for RQ1 and model (2) appropriate for the RQ2s?
> >> >
> >> >Thank you for your perspectives,
> >> >Luke


