[R-meta] Multivariate meta regression and predict for robust estimates

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Oct 21 14:46:17 CEST 2021


Going to jump in here with respect to question B)

I don't think (James -- please correct me if I overlooked something) that there is something like predict() in clubSandwich. However, one could be a bit sneaky and put the clubSandwich results into a metafor object and then proceed with predict(). An example:

library(metafor)
library(clubSandwich)

dat <- dat.bornmann2007
dat <- escalc(measure="OR", ai=waward, n1i=wtotal, ci=maward, n2i=mtotal, data=dat)
res <- rma.mv(yi, vi, mods = ~ type, random = ~ 1 | study/obs, data=dat)
res

sav <- robust(res, cluster=dat$study)
sav
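# (note: robust() here uses metafor's usual cluster-robust adjustment rather
# than CR2, so the clubSandwich results below will generally differ a bit,
# especially when the number of clusters is small)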

# corresponding clubSandwich results
tmp1 <- coef_test(res, vcov="CR2", cluster=dat$study)
tmp2 <- conf_int(res, vcov="CR2", cluster=dat$study)
tmp3 <- Wald_test(res, constraints=constrain_zero(res$btt), vcov="CR2", cluster=dat$study)
tmp1
tmp2
tmp3

# force those results into 'sav'
sav$b     <- sav$beta <- tmp1$beta
sav$se    <- tmp1$SE
sav$zval  <- tmp1$tstat
sav$ddf   <- tmp1$df
sav$pval  <- tmp1$p_Satt
sav$ci.lb <- tmp2$CI_L
sav$ci.ub <- tmp2$CI_U
sav$vb    <- vcovCR(res, cluster=dat$study, type="CR2")
sav$QM    <- tmp3$Fstat
sav$QMdf  <- c(tmp3$df_num, round(tmp3$df_denom,2))
sav$QMp   <- tmp3$p_val
sav

# now proceed with predict()
predict(sav, newmods=0:1, transf=exp)
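
As a quick sanity check (assuming predict() takes the standard errors of the
estimated average effects from the 'vb' element, which is what was replaced
above), one can recompute those SEs directly from the CR2 variance-covariance
matrix; 'Xnew' below is just the design matrix implied by newmods=0:1:

Xnew <- rbind(c(1,0), c(1,1))
sqrt(diag(Xnew %*% vcovCR(res, cluster=dat$study, type="CR2") %*% t(Xnew)))
predict(sav, newmods=0:1) # the 'se' column (log odds ratio scale) should match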

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces@r-project.org] On
>Behalf Of Ivan Jukic
>Sent: Thursday, 21 October, 2021 10:57
>To: Reza Norouzian
>Cc: r-sig-meta-analysis@r-project.org
>Subject: Re: [R-meta] Multivariate meta regression and predict for robust
>estimates
>
>Dear Reza,
>
>thank you for responding and providing such a great example (walkthrough). I'm
>glad that you covered all three scenarios, because I had been thinking about
>aggregating my effect sizes and thereby "reducing" my data structure from your
>scenario (3) to scenario (1). It seems that I was on the right track, but I don't
>want to aggregate effect sizes anymore, so I'll stick with the third scenario you
>described.
>
>Thank you for correcting yourself (and for responding so late at night). I
>really appreciate it!
>
>I actually tried out your examples right after you first responded and realised
>what's missing in the second model, so all good. With regards to the SATcoaching
>example, how so? Verbal and math tests are repeated in three studies, but I guess
>the participants providing these scores are independent (I'm not sure about the
>study by Burke, though). You mean no repetition of the same level of outcome
>occurs within the same sample, perhaps?
>
>Based on your response, I would like to add two (related) things.
>
>1) The second and third models should effectively be the same, and they are,
>after adding what was missing to the second one (~ 1 | es_id). While the syntax
>of the third one makes a lot of sense, I'm struggling to understand the syntax of
>the second one and, ultimately, why the two are the same.
>
>2) When you say "coded for" and "haven't coded for" the design-related feature(s),
>you are literally referring to having vs. not having all "columns" related to study,
>groups, and outcomes properly aligned, right? I guess it's hard for me to relate,
>as I always have these three together with es_id (or row_id, as you say) as a
>fourth one.
>
>Thank you very much for your time,
>Ivan
>
>
>
>From: Reza Norouzian <rnorouzian@gmail.com>
>Sent: Thursday, 21 October 2021 7:36 PM
>To: Ivan Jukic <ivan.jukic@aut.ac.nz>
>Cc: r-sig-meta-analysis@r-project.org <r-sig-meta-analysis@r-project.org>
>Subject: Re: [R-meta] Multivariate meta regression and predict for robust
>estimates
>
>I guess I responded too quickly (1:30 am answer effect:). CORRECTION:
>
>First, if your data is just like clubSandwich::SATcoaching, then yes
>your current model works, as no repetition of the same levels of
>outcome occurs.
>
>Second, in my own second model, you can account for repetition of the
>same levels of outcome by adding random row effects:
>
>rma.mv(yi, V, random = list(~ outcome | study, ~ outcome |
>interaction(study, group), ~1|row_id), struct = c("UN","UN"))
>
>Now, this model will recognize the repetition of the same levels of outcome.
>
>Sorry for the confusion,
>Reza
>
>
>On Thu, Oct 21, 2021 at 12:15 AM Reza Norouzian <rnorouzian@gmail.com> wrote:
>>
>> Dear Ivan,
>>
>> I leave question (B) to James or Wolfgang (or other list members).
>> Regarding question (A), I discuss three situations.
>>
>> First, your current model assumes that in each study, the same levels
>> of outcome don't repeat, something along the lines of:
>>
>> study  outcome
>> 1      A
>> 1      B
>> 2      A
>> 2      B
>> 3      B
>> 4      A
>>
>> If your data has the above structure, then your current model seems
>> reasonable. It assumes that the levels of outcome are correlated with one
>> another within each study (with the same correlation structure across studies).
>>
>> Since you have assumed a UN structure and supplied a V matrix, the more
>> frequently occurring levels of outcome lend support to the less frequently
>> occurring ones, thereby improving the fixed coefficients (in terms of bias)
>> and the standard errors (in terms of magnitude) for the less frequently
>> occurring levels of outcome.
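>>
>> (For reference, that "current model" is, schematically, the one from your
>> original message:
>>
>> rma.mv(yi, V, mods = ~ mod1*outcome, random = ~ outcome | study,
>>        struct = "UN", data = dat)
>>
>> with V the imputed covariance matrix.)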
>>
>> Second, if your data structure is more along the lines of:
>>
>> study group outcome
>>     1     1       A
>>     1     1       B
>>     1     2       A
>>     1     2       B
>>     2     1       A
>>     2     1       B
>>     2     2       A
>>     2     2       B
>>     3     1       B
>>     4     1       A
>>
>> That is, if the same levels of outcome (e.g., A) are repeated in some
>> studies only because of a particular "coded for" design-related feature
>> (e.g., some studies having more than one treatment group), then you can
>> try:
>>
>> rma.mv(yi, V, random = list(~ outcome | study, ~ outcome |
>> interaction(study, group)), struct = c("UN","UN"))
>>
>> Or simplify the `struct =` (perhaps to "HCS" in case of overparameterization).
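>>
>> For instance, one possible simplification (here, of the group-level part
>> only) would be:
>>
>> rma.mv(yi, V, random = list(~ outcome | study, ~ outcome |
>> interaction(study, group)), struct = c("UN","HCS"))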
>>
>> This second model assumes that in addition to the study-level
>> correlations between the levels of outcome, we can have separate
>> group-level correlations between the levels of outcome. This will then
>> recognize the repetition of the same levels of outcome due to the
>> existence of multi-group studies.
>>
>> A third situation might be that your data structure is exactly like the
>> one above (i.e., the same levels of outcome repeat in some studies) but
>> that you "haven't coded for" the design-related feature that has
>> caused that repetition, that is:
>>
>> study outcome  row_id
>>     1       A  1
>>     1       B  2
>>     1       A  3
>>     1       B  4
>>     2       A  5
>>     2       B  6
>>     2       A  7
>>     2       B  8
>>     3       B  9
>>     4       A  10
>>
>> Then, you can try:
>>
>> rma.mv(yi, V, random = list(~ outcome | study, ~ 1 | row_id), struct = "UN")
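>>
>> (Here row_id is simply a unique identifier for each effect size / row,
>> e.g., dat$row_id <- seq_len(nrow(dat)).)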
>>
>> This last model shares the same study-level assumption as the previous
>> models, but then it simply allows each level of outcome to be
>> heterogeneous (i.e., to have variation in it), accounting for the
>> repetitions of the same level of outcome.
>>
>> Kind regards,
>> Reza
>>
>>
>>
>> On Wed, Oct 20, 2021 at 10:46 PM Ivan Jukic <ivan.jukic@aut.ac.nz> wrote:
>> >
>> > Dear all,
>> >
>> > Let's say that one wants to perform a multivariate random-effects meta
>regression where the data structure can be described as follows: 1) There are 2
>outcomes; 2) there is a continuous moderator of interest; 3) all studies reported
>on both outcomes; and 4) most of the studies reported multiple effect sizes for
>at least one of the outcomes. This means that some participants, from certain
>groups and for a given outcome, provided data multiple times.
>> >
>> > Following the examples below (where 1 is extremely relevant)
>> >
>> > 1. https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/
>> > 2. http://www.metafor-project.org/doku.php/analyses:berkey1998
>> > 3. https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000097.html
>> >
>> > I would specify the model as follows:
>> >
>> > res <- rma.mv(yi = yi,
>> >                   V = V,
>> >                   data = dat,
>> >                   random = ~ outcome | study,
>> >                   method = "REML",
>> >                   test = "t",
>> >                   slab = study,
>> >                   struct = "UN",
>> >                   mods = ~ mod1*outcome)
>> >
>> > A) I'm wondering if this would account for the fact that there are multiple
>effect sizes coming from the same study for a given outcome? In a "regular"
>multilevel model, I would typically have study/es_id.
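>> > (In model form, that "regular" multilevel specification would be something
>> > like rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat).)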
>> >
>> > B) In addition, is anyone aware of a predict function that could be used
>with robust estimates (e.g., after using coef_test from the clubSandwich package)?
>predict.rma.mv works wonderfully in combination with robust from metafor, but I
>would like to take advantage of clubSandwich's "CR2", which should in principle
>lead to more accurate results in small samples.
>> >
>> > There is something similar that apparently works with the robumeta package:
>> > https://rdrr.io/github/zackfisher/robumeta/src/R/predict.robu.R
>> >
>> > Thank you for your time,
>> > Ivan


