[R-meta] When to skip an extra level?

Timothy MacKenzie fswfswt at gmail.com
Wed Sep 15 23:46:11 CEST 2021


Dear Wolfgang,

This is very helpful, thank you. In the first linked post
(https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-July/000896.html),
you also say that for a lower level (id) to be added below a higher
level (outcome) in a model like '~ 1 | study/outcome/id', we need many
studies that have repeatedly used the same 'outcome'.
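
Just so I can check this in my own data, here is a small sketch of
what I think that diagnostic looks like (purely illustrative -- 'dat',
'yi', 'vi', 'study', 'outcome', and 'id' are placeholder names for my
data frame and columns, not something from your posts):

library(metafor)

# number of units at each level
length(unique(dat$study))                # studies
nrow(unique(dat[c("study", "outcome")])) # study-outcome combinations
nrow(dat)                                # rows (id level)

# do studies repeatedly use the same outcome?
table(dat$outcome, dat$study)

# the full three-level model
res <- rma.mv(yi, vi, random = ~ 1 | study/outcome/id, data=dat)
res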

To tie that back to the second post: for 'study' to be kept as the
lower level, we need to make sure there is repetition within 'paper'
(the higher level), i.e., enough papers that report more than one study.
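
If I have that right, the check for the second post's scenario would
look something like this (again just a sketch, borrowing the variable
names from the quoted message below; 'dat', 'yi', and 'vi' are
placeholders):

# roughly as many papers as studies means the paper- and study-level
# variance components will be hard to tell apart
length(unique(dat$paper_id))
nrow(unique(dat[c("paper_id", "study_id")]))

# model with all three levels
res.full <- rma.mv(yi, vi, random = ~ 1 | paper_id/study_id/row_id, data=dat)

# model with the study level dropped
res.red <- rma.mv(yi, vi, random = ~ 1 | paper_id/row_id, data=dat)

# compare the estimated variance components
res.full$sigma2
res.red$sigma2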

Thank you,
Tim

On Wed, Sep 15, 2021 at 1:06 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Dear Tim,
>
> The question generally is when it makes sense to leave out a level if the data could be regarded as having a hierarchical structure (which is modeled in terms of nested random effects along the lines of '~ 1 | var1/var2/var3/...') and if so, which level(s) to leave out.
>
> I don't think there is any general consensus on this or even much empirical evidence to back up any particular approach. However, in general, I would say that if the number of units at a particular level is very similar to the number of units at one level below it (e.g., there are 199 papers and 200 studies, so one paper describes two studies while the remaining 198 papers describe one study -- to make the example from the second link even more extreme), then it becomes very difficult to distinguish the variances at those two levels and I would consider dropping one of the two levels.
>
> I don't have any super strong feelings on whether to then drop the upper (paper) or lower (study) level -- in the extreme scenario above, it is unlikely to matter. Dropping the paper level would treat the two studies from that one paper as independent. Dropping the study level would assume that the average true effects (averaged over whatever lower levels there are in the hierarchy below 'studies') in those two studies from that one paper are homogeneous. Neither is (probably) correct.
>
> I cannot tell you where the exact point is (in terms of # of papers versus # of studies) where I would start to consider dropping a level.
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
> >Behalf Of Timothy MacKenzie
> >Sent: Wednesday, 15 September, 2021 2:31
> >To: R meta
> >Subject: [R-meta] When to skip an extra level?
> >
> >Dear Meta-analysis Community Members,
> >
> >I want to get some clarity regarding when not to add an additional
> >level. I have found two posts and was wondering how they agree with
> >one another, since the first one seems to be at odds with the
> >second.
> >
> >***This post: https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-July/000896.html
> >suggests that we should avoid adding an extra level (row id) in:
> >
> >random = ~ 1 | study/outcome/id
> >
> >unless many "studies" have repeatedly used the same "outcome".
> >
> >***This post: https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2019-March/001479.html
> >(second message from the top) suggests that we should avoid adding an
> >extra level (study_id) in:
> >
> >random = ~ 1 | paper_id/study_id/row_id
> >
> >Arguing that "One can probably skip a level if the number of units at
> >a particular level is not much higher than the number of units at the
> >next level (the two variance components are then hard to distinguish).
> >So, for example, 200 "studies" in 180 "papers" is quite similar, so
> >one could probably *leave out the studies* level and only add random
> >effects for papers."
> >
> >Sincerely,
> >Tim


