[R-meta] rma.mv: why some var components change but others don't across 2 models

Stefanou Revesz stefanourevesz using gmail.com
Mon Nov 1 17:20:07 CET 2021


Thanks! Feel free to ignore this, but I don't think it has come up on
the mailing list before.

If I use list(~ 1 | study, ~ 1 | outcome, ~ 1 | measure), then, everything
else aside, it means I believe there are inherent differences in
'outcome' that necessitate disentangling the 'outcome' effects from those
of study and measure (crossing outcome with study and measure).

On the other hand, I can use list(~ outcome | study, ~ 1 | measure) with
struct="UN", which again reflects the belief that there are inherent
differences in 'outcome', but without disentangling the 'outcome' effects
from those of study and measure (outcome nested within study).

What's the difference between the two strategies above, and why do I
never see list(~ 1 | study, ~ 1 | outcome) in the archives? All I see is
either '~ 1 | study/outcome' or its multivariate reparameterization
'~ outcome | study'.
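
To make the comparison concrete, this is roughly what I have in mind (a
sketch using the data from my first message below; fit1/fit2 are just
placeholder names):

library(metafor)
m <- read.csv("https://raw.githubusercontent.com/fpqq/w/main/c.csv")

# strategy 1: outcome crossed with study and measure
fit1 <- rma.mv(yi, vi,
               random = list(~ 1 | study, ~ 1 | outcome, ~ 1 | measure),
               data = m)

# strategy 2: outcome nested within study, with an unstructured
# variance-covariance matrix across the outcome levels
fit2 <- rma.mv(yi, vi,
               random = list(~ outcome | study, ~ 1 | measure),
               struct = "UN", data = m)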

Stefanou

On Mon, Nov 1, 2021 at 6:09 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Sounds right.
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: Stefanou Revesz [mailto:stefanourevesz using gmail.com]
> >Sent: Saturday, 30 October, 2021 21:10
> >To: Viechtbauer, Wolfgang (SP)
> >Cc: R meta
> >Subject: Re: rma.mv: why some var components change but others don't across 2
> >models
> >
> >Oops. I was referring to your linked post:
> >https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-July/000896.html
> >
> >study  outcome  measure  study.outcome.measure
> >1      A        1        1.A.1
> >1      B        1        1.B.1
> >2      A        1        2.A.1
> >3      A        2        3.A.2
> >3      B        1        3.B.1
> >3      C        2        3.C.2
> >4      B        1        4.B.1
> >
> >list(~ 1 | study, ~1|outcome, ~ 1 | measure) would mean that rows that
> >share a study, rows that share an outcome, and rows that share a measure
> >can separately get their own similar random effects.
> >
> >list(~ 1 | study/outcome, ~ 1 | measure) would mean that rows that
> >share a study, and then within each study, rows that share an outcome,
> >can separately get their own similar random effects. Additionally,
> >rows that share a measure can get their own similar random effects.
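> >
> >For instance, with the toy data above (rebuilt here as a small data frame
> >just for illustration), the crossed view gives outcome A the same random
> >effect in studies 1, 2, and 3, whereas the nested view treats 1/A, 2/A,
> >and 3/A as separate levels:
> >
> >dat <- data.frame(study   = c(1, 1, 2, 3, 3, 3, 4),
> >                  outcome = c("A", "B", "A", "A", "B", "C", "B"),
> >                  measure = c(1, 1, 1, 2, 1, 2, 1))
> ># crossed: levels of the 'outcome' random effect (3 levels: A, B, C)
> >with(dat, table(outcome))
> ># nested: levels of the 'study/outcome' random effect (7 levels: 1.A, 1.B, ...)
> >with(dat, table(interaction(study, outcome, drop = TRUE)))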
> >
> >Am I correctly describing the differences?
> >
> >So, when the "~1|outcome" component from the `res` model and the
> >"study/outcome" component from `res2` are similar ONLY NUMERICALLY, that
> >means the amount of variance estimated for these two completely different
> >types of random effects happens to be the same, entirely by coincidence.
> >
> >Thanks very much,
> >Stefanou
> >
> >On Sat, Oct 30, 2021 at 12:35 PM Stefanou Revesz
> ><stefanourevesz using gmail.com> wrote:
> >>
> >> Sure, to confirm differences between the two models, can we say model
> >> `res` (i.e., list(~ 1 | study, ~1|outcome, ~ 1 | measure)) views the
> >> random effects this way:
> >>
> >> res_model <- with(m, interaction(study,outcome,measure))
> >>
> >> But model `res2` (i.e., list(~ 1 | study/outcome, ~ 1 | measure))
> >> views random effects this way:
> >>
> >> res2_model <- with(m, interaction(interaction(study,outcome), measure))
> >>
> >> Is this correct?
> >>
> >> Stefanou
> >>
> >> On Sat, Oct 30, 2021 at 11:23 AM Viechtbauer, Wolfgang (SP)
> >> <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >> >
> >> > These are totally different models, so I would not read anything into
> >> > this. It is purely a coincidence.
> >> >
> >> > Best,
> >> > Wolfgang
> >> >
> >> > >-----Original Message-----
> >> > >From: Stefanou Revesz [mailto:stefanourevesz using gmail.com]
> >> > >Sent: Saturday, 30 October, 2021 18:19
> >> > >To: Viechtbauer, Wolfgang (SP)
> >> > >Cc: R meta
> >> > >Subject: Re: rma.mv: why some var components change but others don't
> >> > >across 2 models
> >> > >
> >> > >Wolfgang, you're a lifesaver! That's such a confusing coincidence!
> >> > >
> >> > >As we inch toward the last few studies, the variance components for
> >> > >'outcome' in `res` (the fully crossed model) and `res2` (the nested +
> >> > >crossed model) get more and more similar.
> >> > >
> >> > >Does this say anything about the data structure up to these last few
> >> > >studies vs. that of the last few studies? (I'm still in shock, and
> >> > >want to rationalize why this is happening to me)
> >> > >
> >> > >res <- rma.mv(yi, vi, random = list(~ 1 | study, ~1 | outcome, ~ 1 |
> >> > >measure), data=m, subset=study <= 54)
> >> > >res2 <- rma.mv(yi, vi, random = list(~ 1 | study/outcome, ~ 1 |
> >> > >measure), data=m, subset=study <= 54)
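> >> > >
> >> > >One way I could look at this (just a sketch, assuming metafor is loaded
> >> > >and 'm' is the data from my first message) is to track the 'outcome'
> >> > >variance component of the crossed model as studies are added:
> >> > >
> >> > >sapply(20:57, function(k) {
> >> > >  dk <- m[m$study <= k, ]
> >> > >  fit <- rma.mv(yi, vi,
> >> > >                random = list(~ 1 | study, ~ 1 | outcome, ~ 1 | measure),
> >> > >                data = dk)
> >> > >  fit$sigma2[2]  # second sigma^2 component = outcome
> >> > >})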
> >> > >
> >> > >Stefanou
> >> > >
> >> > >On Sat, Oct 30, 2021 at 11:03 AM Viechtbauer, Wolfgang (SP)
> >> > ><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >> > >>
> >> > >> The values are not exactly identical and it is coincidence that they
> >> > >> end up looking that way when rounded to 4 decimal places. For example
> >> > >> try:
> >> > >>
> >> > >> res <- rma.mv(yi, vi, random = list(~ 1 | study, ~1 | outcome, ~ 1 | measure),
> >> > >>               data=m, subset=study <= 20)
> >> > >> res2 <- rma.mv(yi, vi, random = list(~ 1 | study/outcome, ~ 1 | measure),
> >> > >>               data=m, subset=study <= 20)
> >> > >>
> >> > >> and they are rather different.
> >> > >>
> >> > >> Best,
> >> > >> Wolfgang
> >> > >>
> >> > >> >-----Original Message-----
> >> > >> >From: Stefanou Revesz [mailto:stefanourevesz using gmail.com]
> >> > >> >Sent: Saturday, 30 October, 2021 15:06
> >> > >> >To: Viechtbauer, Wolfgang (SP)
> >> > >> >Cc: R meta
> >> > >> >Subject: Re: rma.mv: why some var components change but others don't
> >> > >> >across 2 models
> >> > >> >
> >> > >> >Dear Wolfgang,
> >> > >> >
> >> > >> >Thank you for your reply. I did check that previously. But my question
> >> > >> >is why 'outcome' gives the same variance component across both the res
> >> > >> >(with 4 levels) and res2 (with 68 levels) models?
> >> > >> >
> >> > >> >Thank you so much,
> >> > >> >Stefanou
> >> > >> >
> >> > >> >On Sat, Oct 30, 2021, 7:08 AM Viechtbauer, Wolfgang (SP)
> >> > >> ><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >> > >> >Dear Stefanou,
> >> > >> >
> >> > >> >With the way you have 'outcome' coded, these two formulations are not
> >> > >> >equivalent.
> >> > >> >I believe this post discusses this:
> >> > >> >
> >> > >> >https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-July/000896.html
> >> > >> >
> >> > >> >Best,
> >> > >> >Wolfgang
> >> > >> >
> >> > >> >>-----Original Message-----
> >> > >> >>From: Stefanou Revesz [mailto:stefanourevesz using gmail.com]
> >> > >> >>Sent: Friday, 29 October, 2021 17:24
> >> > >> >>To: R meta
> >> > >> >>Cc: Viechtbauer, Wolfgang (SP)
> >> > >> >>Subject: rma.mv: why some var components change but others don't
> >> > >> >>across 2 models
> >> > >> >>
> >> > >> >>Dear Wolfgang and Expert List Members,
> >> > >> >>
> >> > >> >>Why does `study` with 57 levels in model `res` give `sigma^2.1 = 0.0200`,
> >> > >> >>while `study` with 57 levels in model `res2` gives `sigma^2.1 = 0.0122`?
> >> > >> >>(SAME LEVELS BUT DIFFERENT RESULTS)
> >> > >> >>
> >> > >> >>Why does `outcome` with 4 levels in model `res` give `sigma^2.2 = 0.0093`,
> >> > >> >>while `outcome` with 68 levels in model `res2` also gives `sigma^2.2 = 0.0093`?
> >> > >> >>(DIFFERENT LEVELS BUT SAME RESULTS)
> >> > >> >>
> >> > >> >>For reproducibility, below are my data and code.
> >> > >> >>
> >> > >> >>Many thanks to you all,
> >> > >> >>Stefanou
> >> > >> >>
> >> > >> >>library(metafor)
> >> > >> >>m <- read.csv("https://raw.githubusercontent.com/fpqq/w/main/c.csv")
> >> > >> >>
> >> > >> >>res <- rma.mv(yi, vi, random = list(~ 1 | study, ~1|outcome, ~ 1 |
> >> > >> >>measure), data=m)
> >> > >> >>            estim    sqrt  nlvls  fixed   factor
> >> > >> >>sigma^2.1  0.0200  0.1415     57     no    study
> >> > >> >>sigma^2.2  0.0093  0.0964      4     no  outcome
> >> > >> >>sigma^2.3  0.0506  0.2249      7     no  measure
> >> > >> >>
> >> > >> >>res2 <- rma.mv(yi, vi, random = list(~ 1 | study/outcome, ~ 1 |
> >> > >> >>measure), data=m)
> >> > >> >>            estim    sqrt  nlvls  fixed         factor
> >> > >> >>sigma^2.1  0.0122  0.1105     57     no          study
> >> > >> >>sigma^2.2  0.0093  0.0964     68     no  study/outcome
> >> > >> >>sigma^2.3  0.0363  0.1904      7     no        measure


