[R-meta] Comparing estimates of independent subgroups

James Pustejovsky jepusto at gmail.com
Wed Oct 4 20:47:44 CEST 2017


Roger,

I think the issue is in the rma.mv syntax. The model that you fit is not
equivalent to the two separate models---it has only three variance
components, rather than four. To allow for separate variances by allocation
at both the publication level and the trial level, the publication-level
random effects and trial-level random effects need to be specified as
separate terms:

(rma.mv(yi, vi, mods = ~ alloc,
        random = list(~ alloc | publication, ~ alloc | trial),
        struct = c("DIAG", "DIAG"),
        data = dat, digits = 3))
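
If you assign that fit to an object (fit is just a placeholder name here,
and dat is assumed to be set up as in your message below), you should see
the four variance components, two at each level:

fit <- rma.mv(yi, vi, mods = ~ alloc,
              random = list(~ alloc | publication, ~ alloc | trial),
              struct = c("DIAG", "DIAG"),
              data = dat, digits = 3)
fit$tau2    # publication-level variances, one per allocation group
fit$gamma2  # trial-level variances, one per allocation group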

I hadn't realized before that the struct argument can take a vector with
one character string for each element in the random list. It looks like the
arguments will be recycled, so specifying struct = "DIAG" will give an
equivalent result.
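
That is, something like this should give an identical fit:

(rma.mv(yi, vi, mods = ~ alloc,
        random = list(~ alloc | publication, ~ alloc | trial),
        struct = "DIAG",   # recycled across both random-effects terms
        data = dat, digits = 3))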

Cheers,
James


On Mon, Oct 2, 2017 at 9:42 AM, Martineau, Roger <Roger.Martineau at agr.gc.ca>
wrote:

> Hi list members,
>
> Dr Viechtbauer used the BCG vaccine meta-analysis to illustrate how the
> difference between estimates from independent meta-analyses (or subgroups)
> can be tested:
> http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
>
> In this example, 2 RE models were fitted separately to each subset of the
> “alloc” variable (i.e., random/other) using the rma() function. Then, the
> 2 estimates were compared using an FE model. This example was for a simple
> 2-level model. I would like to know whether the example can be extended to
> a 3-level model.
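>
> For reference, the 2-level comparison on that page goes roughly along
> these lines (a minimal sketch of my understanding of it, using dat and
> alloc as set up below):
>
> res1 <- rma(yi, vi, data=dat, subset=alloc=="random")
> res2 <- rma(yi, vi, data=dat, subset=alloc=="other")
> dat.comp <- data.frame(estimate = c(coef(res1), coef(res2)),
>                        stderror = c(res1$se, res2$se),
>                        meta     = c("random", "other"))
> (rma(estimate, sei=stderror, mods = ~ meta, method="FE", data=dat.comp))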
>
> Therefore, let’s create a new variable “publication” (A, B, C, D and E)
> and assume that “trial” (1 to 13) is nested within “publication”:
>
> library(metafor)
> dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
> dat$alloc <- ifelse(dat$alloc == "random", "random", "other")
> dat$publication <- ifelse(dat$trial <= 4, "A",
>                    ifelse(dat$trial > 4 & dat$trial <= 6, "B",
>                    ifelse(dat$trial > 6 & dat$trial <= 9, "C",
>                    ifelse(dat$trial > 9 & dat$trial <= 11, "D", "E"))))
>
> The 2 separate RE multilevel models would be:
> (res1a <- rma.mv(yi, vi, data=dat, subset=alloc=="random",
>                  random = ~ 1 | publication/trial))
> (res2a <- rma.mv(yi, vi, data=dat, subset=alloc=="other",
>                  random = ~ 1 | publication/trial))
>
> (dat.comp.a <- data.frame(estimate = c(coef(res1a), coef(res2a)),
>                           stderror = c(res1a$se, res2a$se),
>                           meta     = c("random", "other"),
>                           sigma2.1 = round(c(res1a$sigma2[1], res2a$sigma2[1]), 3),
>                           sigma2.2 = round(c(res1a$sigma2[2], res2a$sigma2[2]), 3)))
>     estimate  stderror   meta sigma2.1 sigma2.2
> 1 -0.9386383 0.4183251 random    0.246    0.238
> 2 -0.4757994 0.2266656  other    0.016    0.204
>
> (rma(estimate, sei=stderror, mods = ~ meta, method="FE",
>      data=dat.comp.a, digits=3))
> Model Results:
>             estimate     se    zval   pval   ci.lb   ci.ub
> intrcpt       -0.476  0.227  -2.099  0.036  -0.920  -0.032  *
> metarandom    -0.463  0.476  -0.973  0.331  -1.395   0.470
> The difference between the 2 estimates is not significant (z-value =
> -0.973 with P-value = 0.331).
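>
> The same comparison can also be computed directly as a Wald-type test
> from the two estimates and their SEs:
>
> with(dat.comp.a, {
>   z <- (estimate[meta == "random"] - estimate[meta == "other"]) /
>        sqrt(stderror[meta == "random"]^2 + stderror[meta == "other"]^2)
>   round(c(z = z, pval = 2 * pnorm(-abs(z))), 3)
> })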
>
> Fitting a meta-regression multilevel model using all studies, with a
> different amount of residual heterogeneity in each subset of “alloc”:
>
> (rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | publication/trial,
>         struct="DIAG", data=dat, digits=3))
>
> Model Results:
>
>              estimate     se    zval   pval   ci.lb  ci.ub
> intrcpt        -0.449  0.460  -0.975  0.330  -1.351  0.453
> allocrandom    -0.487  0.666  -0.732  0.464  -1.793  0.818
>
> The z-values, P-values, coefficients, and SEs do not match the results
> obtained from the 2 separate RE multilevel models.
>
> Any assistance would be greatly appreciated,
>
> Roger ☺



