[R-meta] random part in meta-regression vs. that in multilevel models

Jack Solomon
Thu Mar 18 17:46:57 CET 2021


Dear Wolfgang,

Thank you so much for your prompt response. Please let me rephrase my first
question.

**First, given that the intercept has been dropped from the fixed part, what
does the `~1` (denoting varying intercepts) in `random= ~1 | id/outcome` in the
random part estimate in the model below?

In other words, if we drop the intercept (so that no reference category
for `outcome` exists), then how can we allow (by using `random= ~1 |
id/outcome`) a dropped intercept to vary across the different levels of `id`
and `id.outcome`, respectively?

`metafor::rma.mv(es ~ 0+outcome, V, random= ~1|id/outcome, data = data)`

Thank you,
Jack


On Thu, Mar 18, 2021 at 6:16 AM Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> Dear Jack,
>
> See below for my responses.
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: R-sig-meta-analysis [mailto:
> r-sig-meta-analysis-bounces using r-project.org] On
> >Behalf Of Jack Solomon
> >Sent: Wednesday, 17 March, 2021 22:01
> >To: r-sig-meta-analysis using r-project.org
> >Subject: [R-meta] random part in meta-regression vs. that in multilevel
> models
> >
> >Hello List Members,
> >
> >**First, I have always thought it is illegitimate to add random effects
> >for something that has not been estimated in the fixed part of the model.
> >For example:
> >
> >`lme4::lmer(math ~ female*minority + (ses | sch.id), data = data)` is
> >illegitimate because `ses` has not been estimated in the fixed part.
>
> It's perfectly fine to do this if one is willing to assume that the mean
> slope of ses is 0. Often, this is not an appropriate assumption, but I would
> not say it is illegitimate.
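
A minimal sketch of the two specifications (purely illustrative, assuming a
data frame `data` with `math`, `female`, `minority`, `ses`, and `sch.id`):

library(lme4)

# random slope for ses but no fixed slope: the average ses slope is fixed at 0
m0 <- lmer(math ~ female*minority + (ses | sch.id), data = data)

# same random part, but the average ses slope is estimated as well
m1 <- lmer(math ~ female*minority + ses + (ses | sch.id), data = data)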
>
> This reminds me of the common trope that one should never add an
> interaction to a model without also adding the corresponding main effects.
> One can come up with perfectly valid arguments where this is not necessary
> under certain circumstances. For example, suppose I have conducted a
> randomized study where I measured people in two groups twice, once pre and
> once post treatment. Assume the data are in this format:
>
> person group post y
> 1      T     0    .
> 1      T     1    .
> 2      C     0    .
> 2      C     1    .
> ...
>
> I assume the meaning of these variables is self-evident. Then
>
> y ~ post + post:group
>
> is a perfectly valid model as far as I am concerned. It assumes that there
> is no pre-treatment group difference (since the model does not include a
> group 'main effect'), but for a randomized study, any pre-treatment group
> difference would be due to chance anyway, so why estimate a pre-treatment
> group difference that, in reality, must by definition be 0?
>
> So this is a model that includes the post:group interaction, but not all
> corresponding main effects. Is this wrong? Not to me at least. I might not
> use this model for other reasons - for example, to avoid discussions with
> reviewers who will claim that one MUST ALWAYS include all main effects
> corresponding to an interaction <sigh> - but that's a different issue.
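
A small simulated sketch of such a design (hypothetical data; lme4 is used
here to account for the two measurements per person):

library(lme4)

set.seed(42)
n   <- 100
dat <- data.frame(person = rep(1:n, each = 2),
                  group  = rep(rep(c("C", "T"), each = 2), n/2),
                  post   = rep(0:1, times = n))
u     <- rnorm(n, sd = 0.8)          # person-level random intercepts
dat$y <- u[dat$person] + 0.3*dat$post +
         0.5*dat$post*(dat$group == "T") + rnorm(2*n, sd = 0.5)

# interaction without the group 'main effect': no pre-treatment group
# difference is estimated, which is defensible under randomization
fit <- lmer(y ~ post + post:group + (1 | person), data = dat)
summary(fit)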
>
> >But I frequently see multilevel meta-regression models where the intercept
> >is dropped (~0+...) from the fixed part but at the same time added to
> >the random part. For example:
> >
> >metafor::rma.mv(es ~ 0+outcome, V, random= ~1|id/outcome, data = data)
>
> Assuming 'outcome' is a factor/character variable, es ~ 0+outcome is just
> a reparameterization of es ~ outcome and ultimately those are two identical
> models.
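
A minimal sketch of this equivalence (hypothetical data; `vi` holds the
sampling variances and is passed to V, where it forms a diagonal V matrix):

library(metafor)

set.seed(1234)
dat <- data.frame(id      = rep(1:10, each = 2),
                  outcome = rep(c("A", "B"), times = 10),
                  es      = rnorm(20),
                  vi      = runif(20, 0.05, 0.20))

res1 <- rma.mv(es ~ outcome,     V = vi, random = ~ 1 | id/outcome, data = dat)
res2 <- rma.mv(es ~ 0 + outcome, V = vi, random = ~ 1 | id/outcome, data = dat)

logLik(res1); logLik(res2)   # identical fit; only the fixed-effects coding differs
# res1: intercept = estimated effect for outcome A, outcomeB = difference (B - A)
# res2: outcomeA and outcomeB = the estimated effects for the two outcomes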
>
> Coincidentally, since you brought this up: I run roughly weekly live
> streams where I discuss R and stats, and the session tonight will cover this
> in detail (not in the context of meta-analysis or multilevel modeling, but
> the same idea applies). If you are interested, see:
>
> https://www.wvbauer.com/doku.php/live_streams
>
> These live streams are completely free, no registration required, just
> click on the link and start watching at 5pm CET.
>
> >>>>>>> So, why is this ok in meta-regression?
>
> Yes, in this particular case for sure.
>
> >**Second, I have always thought that `outcome` is treated as a categorical
> >predictor and thus appears only to the **left** of `|`. For example:
> >
> >lme4::lmer(es ~0+outcome + (0 + outcome | id), data = data)
> >
> >But I frequently see multilevel meta-regression models where outcome is
> >treated as a categorical predictor AND a **grouping variable**, thus
> >appearing only to the **right** of `|`. For example:
> >
> >metafor::rma.mv(es ~ 0+outcome, V, random= ~1|id/outcome, data = data)
> >
> >>>>>>> So, why is this ok in meta-regression?
>
> See: https://www.metafor-project.org/doku.php/analyses:konstantopoulos2011
> which discusses how these two may be completely analogous parameterizations
> of the same model, if a certain var-cov structure is assumed for 'outcome |
> id' (that's not the case above for lmer(), since it will automatically use
> an unstructured var-cov matrix for 'outcome | id', but rma.mv() and
> nlme::lme() allow specification of different structures, struct="CS" being
> the default for the former).
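
A sketch of the two analogous random-effects specifications (reusing the
hypothetical `dat` from the sketch above):

# multilevel parameterization: random effects for studies ('id') and for
# outcomes within studies
res.ml <- rma.mv(es ~ 0 + outcome, V = vi, random = ~ 1 | id/outcome, data = dat)

# multivariate parameterization with a compound-symmetric var-cov structure
res.mv <- rma.mv(es ~ 0 + outcome, V = vi, random = ~ outcome | id,
                 struct = "CS", data = dat)

logLik(res.ml); logLik(res.mv)   # same fit (when the estimated correlation is non-negative)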
>
> I don't know how many times I have posted the konstantopoulos2011 link
> above on this mailing list, but I can highly recommend it for anybody who
> is doing a multilevel meta-analysis (I got curious; a Google search with
> 'site:https://stat.ethz.ch/pipermail/r-sig-meta-analysis/
> analyses:konstantopoulos2011' suggests 101 times ... plus a few more times
> today!).
>
> Anyway, I hope this helps to clarify some things.
>
> >Many thanks for your support,
> >Jack
>
