[R-meta] Guidance regarding balance in fixed- and random-effects

Luke Martinez martinezlukerm using gmail.com
Sat Oct 16 03:44:38 CEST 2021


Hi Wolfgang,

Sure, that makes perfect sense. Thank you very much.

Luke

On Fri, Oct 15, 2021 at 8:16 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> >-----Original Message-----
> >From: Luke Martinez [mailto:martinezlukerm using gmail.com]
> >Sent: Thursday, 14 October, 2021 17:22
> >To: Viechtbauer, Wolfgang (SP)
> >Cc: R meta
> >Subject: Re: [R-meta] Guidance regarding balance in fixed- and random-effects
> >
> >I guess I mixed up "overfit" (not making a difference) with
> >"overparameterized" (beyond the data's capacity). What I meant to ask
> >is: don't "lab" and "outcome" seem to be overfitting in model (2)?
> >And if so, shouldn't that be a sign that I shouldn't add "lab" and
> >"outcome" in model (1)?
>
> We are going a bit in circles here. If the lab and outcome variances are estimated to be zero, then that's the same as dropping those random effects. So whether you drop them or not - same thing. If they are not zero, then based on what you wrote, you wouldn't drop them. So why not just use the model with those random effects always included? The 'dropping' happens automatically if a variance component is estimated to be 0.
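>
> [A minimal sketch, not from the original thread: in metafor, a multilevel model with the lab and outcome random effects always included might look as follows. The data frame `dat` and its columns `yi`, `vi`, `study`, `lab`, and `outcome` are assumed example names.]
>
> library(metafor)
>
> # Fit the model with all three random effects included:
> res <- rma.mv(yi, vi,
>               random = list(~ 1 | study, ~ 1 | lab, ~ 1 | outcome),
>               data = dat)
>
> # Inspect the estimated variance components; if sigma^2 for lab or
> # outcome is estimated to be 0, the fit is equivalent to a model with
> # that random effect dropped, so the 'dropping' happens automatically.
> res$sigma2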
>
> >My ultimate goal is to better understand whether it is acceptable to
> >fit different models (with different fixed- and random-effects) to
> >answer an initial RQ (i.e., the overall effect of X) versus subsequent
> >RQs (i.e., the effects of mod1 and mod2 on X), or whether it is wiser
> >for the models to be in harmony.
> >
> >I ask this because I think some meta-analysts advocate for "focused"
> >models, where each model is specified (with its own fixed- and
> >random-effects) to answer a specific RQ.
> >
> >Others advocate for "succession in modeling", that is, starting with
> >an empty model and then adding moderators to estimate their effects
> >(i.e., what I inquired about in this post).
>
> I can only tell you what I would tend to do. Generally, I would try to stick to one random effects structure to the extent possible, since that structure is meant to account for the underlying sources of heterogeneity and dependencies in the effects (it also makes the Methods section a whole lot easier to write ...). But this all depends on the types of research questions I am trying to answer -- for some questions, I might need to shift to a different data structure or include different parts of the data in the analysis, so I don't think there is one sweeping answer to this.
>
> Best,
> Wolfgang



More information about the R-sig-meta-analysis mailing list