[R-meta] Guidance regarding balance in fixed- and random-effects

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer@maastrichtuniversity.nl
Thu Oct 14 16:32:09 CEST 2021


Overparameterized means that the parameters are not uniquely estimable. That is not what is happening here (presumably -- I don't know the details of your data). The peak of the likelihood surface just happens to be at 0 for those variance components. If you drop the corresponding random effects from your model, then you will get exactly the same results, so why bother?
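
For instance, here is a minimal sketch of this equivalence, using the dat.konstantopoulos2011 dataset that ships with metafor rather than your data ('district' and 'school' stand in for your nesting levels):

library(metafor)
dat <- dat.konstantopoulos2011

# a unique identifier for each school (school numbers repeat across districts)
dat$school.id <- interaction(dat$district, dat$school, drop = TRUE)

# fix the district component to 0, as if the likelihood had peaked there;
# sigma2 = c(0, NA) fixes the first component and estimates the second
res.fixed <- rma.mv(yi, vi, random = ~ 1 | district/school,
                    sigma2 = c(0, NA), data = dat)

# drop the district random effect entirely
res.drop <- rma.mv(yi, vi, random = ~ 1 | school.id, data = dat)

logLik(res.fixed); logLik(res.drop) # identical log-likelihoods
coef(res.fixed); coef(res.drop)     # identical fixed effects

# profile(res.drop, sigma2 = 1) would show that the remaining component
# is identified, with the likelihood peaking at its estimate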

Best,
Wolfgang 

>-----Original Message-----
>From: Luke Martinez [mailto:martinezlukerm@gmail.com]
>Sent: Thursday, 14 October, 2021 16:09
>To: Viechtbauer, Wolfgang (SP)
>Cc: R meta
>Subject: Re: [R-meta] Guidance regarding balance in fixed- and random-effects
>
>Thank you Wolfgang.
>
>My concern was: wouldn't the subsequent models, like (2) shown below,
>where the moderators return ZERO variance for some of the initially
>non-ZERO levels in (1), be overparameterized?
>
>That is, IF:
>
>(1) rma.mv(yi, vi, random = ~ 1 | lab / study / outcome / time / rowID) ==> All levels give non-ZERO variance
>
>(2) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | lab / study / outcome / time / rowID) ==> Now, "lab" & "outcome" give ZERO variance
>
>THEN: is (2) overparameterized?
>
>IF yes, THEN reparameterize (1) to make its random part match (2):
>
>(11) rma.mv(yi, vi, random = ~ 1 | study / time / rowID) ==> All levels give non-ZERO variance
>
>(22) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID) ==> All levels give non-ZERO variance
>
>On Thu, Oct 14, 2021 at 7:21 AM Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer@maastrichtuniversity.nl> wrote:
>>
>> If the estimates of sigma^2 for lab and outcome are zero, then
>>
>> rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | lab / study / outcome / time / rowID)
>>
>> and
>>
>> rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID)
>>
>> are identical. So I would not bother to manually drop these random
>> effects, since in essence this happens automatically.
>>
>> In general, I would use (1) for all analyses since this is the a priori
>> chosen model that is meant to reflect the dependencies and sources of
>> heterogeneity you think may be relevant. If some components end up being
>> 0 in some models, then so be it, but I would stick to one model for all
>> analyses.
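>>
>> For instance (a quick sketch; dat.konstantopoulos2011 from metafor
>> stands in for your data, and 'year' stands in for mod1*mod2):
>>
>> library(metafor)
>> dat <- dat.konstantopoulos2011
>>
>> # same random structure for both analyses
>> res1 <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)
>> res2 <- rma.mv(yi ~ year, vi, random = ~ 1 | district/school, data = dat)
>>
>> # compare the estimated variance components; a component that is
>> # estimated to be 0 in one model is simply left at 0
>> res1$sigma2
>> res2$sigma2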
>>
>> Best,
>> Wolfgang
>>
>> >-----Original Message-----
>> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces@r-project.org]
>> >On Behalf Of Luke Martinez
>> >Sent: Wednesday, 13 October, 2021 23:26
>> >To: R meta
>> >Subject: [R-meta] Guidance regarding balance in fixed- and random-effects
>> >
>> >Dear Experts,
>> >
>> >Forgive my modeling question. But in answering RQs like "what is the
>> >overall effect of X?", I often fit an intercept-only model with several
>> >nested levels, like:
>> >
>> >(1) rma.mv(yi, vi, random = ~ 1 | lab / study / outcome / time / rowID)
>> >
>> >In the above model, all levels reveal heterogeneity.
>> >
>> >But then in answering other RQs, when I add a couple of moderators,
>> >some of the levels (e.g., "outcome" AND "lab") return ZERO
>> >heterogeneity, leading me to fit a simpler model, like:
>> >
>> >(2) rma.mv(yi ~ mod1*mod2, vi, random = ~ 1 | study / time / rowID)
>> >
>> >Question: When this happens, does this mean that I should go back and
>> >refit model (1) without "outcome" AND "lab" to make the random
>> >specifications of models (1) and (2) uniform?
>> >
>> >OR, is model (1) appropriate for RQ1 and model (2) appropriate for the RQ2s?
>> >
>> >Thank you for your perspectives,
>> >Luke

