[R-meta] Question about Meta analysis

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon May 6 17:30:04 CEST 2024


Please see my responses below.

Best,
Wolfgang

> -----Original Message-----
> From: Maximilian Steininger <maximilian.steininger using univie.ac.at>
> Sent: Monday, May 6, 2024 17:09
> To: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
> Cc: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis using r-
> project.org>
> Subject: Re: [R-meta] Question about Meta analysis
>
> Dear Wolfgang,
>
> As always, many thanks!
>
> > In study 3, there is just a single row. Just to be clear: You are referring to
> a 'test-retest r = 0.9' but this has no bearing on the sampling variance in V.
> If it is a within-study design, the computation of its sampling variance should
> already have been done in such a way that any pre-post correlation is accounted
> for.
>
> I see, that clears things up.
>
> > I am trying to understand your coding for study 4 ("Within-study with one
> control and two intervention conditions"), which you coded as follows:
> >   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
> > 5        4    5      2        1    1     1     2    e    c  40 40 0.5 0.05
> > 6        4    6      2        1    1     1     3    e    c  40 40 0.6 0.05
> > But this coding implies that there are two independent groups, e and c, where
> e was measured at time point 1 and c at time points 2 and 3. I am not sure if I
> really understand this design.
>
> I guess in that case I just misspecified the coding. If it is a pure within-subjects
> design (always the same subjects in every condition), then grp1 and grp2 are
> supposed to always have the same value (so "e" in each row)? Seems like I got
> that wrong, thanks for making me aware of it.

Yes, correct. Same letter/number = same group of subjects.
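
For example, the two rows for study 4 could then be coded like this in R (just a sketch, reusing the placeholder values from your example, not real data):

dat4 <- data.frame(studyid = 4, esid = 5:6, design = 2, subgroup = 1,
                   type = 1, time1 = 1, time2 = c(2, 3),
                   grp1 = "e", grp2 = "e", # same subjects in all conditions
                   ne = 40, nc = 40, yi = c(0.5, 0.6), vi = 0.05)
dat4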

> > For study 6, your coding is:
> >   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
> > 11       6   11      2        1    1     1     2    c    c  90 90 1.1 0.05
> > 12       6   12      2        1    2     1     2    c    c  90 90 1.2 0.05
> >
> > But I think the coding should be:
> >   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
> > 11       6   11      2        1    1     1     2    e    e  90 90 1.1 0.05
> > 12       6   12      2        1    2     1     2    e    e  90 90 1.2 0.05
>
> Makes sense.
>
> > In study 5, there are two subgroups. Since there is (presumably) no overlap of
> subjects across subgroups, the sampling errors across subgroups are independent,
> so we just have two cases of what we have in study 2.
>
> Agreed, and by specifying "random = ~ 1 | studyid/esid" in my model, the
> dependency of the effect sizes from that study should be taken care of.

Yes, correct. With this, you are modeling dependency in the underlying true effects, which can still be present even if the sampling errors are independent.
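
For example, this could be set up roughly as follows (just a sketch, assuming 'dat' contains the columns shown above; rho = 0.6 is an assumed value for the correlation among the sampling errors of effect sizes based on the same subjects, not something the data tell us):

library(metafor)

# approximate V matrix; grp1/grp2 (with group sizes ne/nc) determine which
# effect sizes share subjects and hence have correlated sampling errors
V <- vcalc(vi, cluster = studyid, subgroup = subgroup, obs = esid,
           grp1 = grp1, grp2 = grp2, w1 = ne, w2 = nc,
           rho = 0.6, data = dat)

# multilevel model: random effects for studies and for effect sizes within studies
res <- rma.mv(yi, V, random = ~ 1 | studyid/esid, data = dat)
summary(res)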

> > I recently added measure "SMCRP" to escalc(). This uses the pooled SD from pre
> and post to standardize the difference.
>
> Great! Just out of curiosity: is it based on the approach by Cousineau (2020)?
> doi: 10.20982/tqmp.16.4.p418

Yes -- see: https://wviechtb.github.io/metafor/reference/escalc.html#-a-measures-for-quantitative-variables-2
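
In case a quick example helps (with made-up numbers):

library(metafor)

# standardized mean change, standardized by the pooled SD of the two
# measurement occasions; ri is the correlation between the two measurements
escalc(measure = "SMCRP", m1i = 25, m2i = 20, sd1i = 6, sd2i = 5,
       ni = 40, ri = 0.9)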

> Thanks a lot for taking the time and for your help!
>
> Best,
> Max
