[R-meta] Why am I getting different means when conducting multilevel meta-analysis with factorial moderator vs. as subgroups?

Célia Sofia Moreira celiasofiamoreira at gmail.com
Wed Apr 18 19:53:02 CEST 2018


Dear Professor Wolfgang,

I have a similar issue, so I am taking this opportunity to pose my question:
I ran separate analyses for each subgroup. However, I was advised that
RE models with small sample sizes (small k) can be susceptible to
misleading results. The number of effect sizes per subgroup varies from 1
to 14, although I only fit RE models when k >= 4. Therefore, in an
attempt to overcome this main weakness, I chose to conduct a moderator
analysis instead (together with the small-sample correction from
clubSandwich; see the sketch after the moderator code below). Do you
think the results could be more reliable (or at least less
questionable) with the moderator analysis, given the larger k?

However, like Maximilian, I am also getting different means.
Although the differences are small, I would like to ask your opinion about
the code below. Please let me know if any corrections are needed.

I have several databases, each containing several subgroups
("Measure"), and some studies contribute more than one effect size
(which are therefore correlated).
Id = Effect Id; Study = Study Id
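
For concreteness, the data are structured roughly like this (these rows are
purely illustrative, not actual values):

  Id Study Measure     y     v
   1     1       A  0.25  0.04
   2     1       B  0.31  0.05
   3     2       A -0.10  0.03
   4     3       B  0.42  0.06

Here Study 1 contributes two effect sizes (one per Measure), which is why
those estimates are correlated.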

For each subgroup:

# when each study in base_S1 contributes exactly one effect size
m1 <- rma(y ~ 1, v, data = base_S1)

# when at least one study in base_S2 contributes more than one effect size
m2 <- rma.mv(y ~ 1, V = Vlist, random = ~ 1 | Study/Id, struct = "UN",
             data = base_S2)
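
For reference, Vlist is constructed along these lines (a minimal sketch; the
common within-study correlation r = 0.6 is only an assumed, illustrative
value, and impute_covariance_matrix() comes from the clubSandwich package):

library(clubSandwich)
# rows must be grouped by Study so the blocks of V line up with the data
base_S2 <- base_S2[order(base_S2$Study), ]
# block-diagonal sampling var-cov matrices, one block per Study, built from
# the sampling variances v under the assumed correlation r = 0.6
Vlist <- impute_covariance_matrix(vi = base_S2$v, cluster = base_S2$Study,
                                  r = 0.6)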


For the moderator model:

# when each study in base contributes exactly one effect size
m <- rma(y ~ 0 + Measure, v, data = base)


# otherwise
m <- rma.mv(y ~ 0 + Measure, V = Vlist, random = ~ 1 | Study/Id, struct = "UN",
            data = base)
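
The clubSandwich correction is then applied on top of the moderator model
(a minimal sketch: CR2 cluster-robust standard errors with Satterthwaite
degrees of freedom, clustering on Study):

library(clubSandwich)
# small-sample corrected (CR2) robust tests of the per-Measure mean effects
coef_test(m, vcov = "CR2", cluster = base$Study)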



Thank you very much.


Best,

celia


2018-04-18 17:36 GMT+01:00 Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl>:

> Hi Max,
>
> It's difficult to say without the actual data, but here are a few
> observations/comments.
>
> First of all, it seems to me that the 'subgroup' model is
> overparameterized. Notice that there are 7 levels for both studyid and
> sampleid, so unless these are crossed factors (which I assume they are
> not), then these two are the same. In fact, the estimated variances for
> these two random effects are identical (0.1307), so the actual variance
> due to this random effect is probably getting split between the two (or
> maybe the optimizer started with these values and got stuck there).
>
> Second, the subgroup models allow for the variance components to differ
> across tasks, but the 'moderator' model does not. The latter also assumes
> that the correlation of effects within studies and samples for different
> tasks can only be positive and that the correlation is the same between all
> pairs of tasks. That may not be true.
>
> You might try:
>
> multi.task <- rma.mv(yi = g, V = var.g, data = df,
>                      random = list(~ factor(task.type) | sampleid,
>                                    ~ factor(task.type) | studyid),
>                      mods = ~ factor(task.type) - 1)
>
> which does allow for a negative correlation. With struct="HCS" you could
> also allow for different variances across tasks. And with struct="UN" you
> could also allow for different correlations. But the latter estimates 8
> variances and 12 correlations and that might be asking a bit much here.
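>
> For instance, the "HCS" variant would look something like this (a sketch
> of the call, not tested on the actual data):
>
> multi.task <- rma.mv(yi = g, V = var.g, data = df,
>                      random = list(~ factor(task.type) | sampleid,
>                                    ~ factor(task.type) | studyid),
>                      struct = c("HCS", "HCS"),
>                      mods = ~ factor(task.type) - 1)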
>
> Since studyid and sampleid have nearly the same number of levels, you
> might consider dropping one of those random effects. Maybe you can get
> this to converge:
>
> multi.task <- rma.mv(yi = g, V = var.g, data = df,
>                      random = ~ factor(task.type) | sampleid,
>                      struct = "UN", mods = ~ factor(task.type) - 1)
>
> I suspect that things may start to look a bit more like the 'subgroup'
> models then.
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-
> bounces at r-project.org] On Behalf Of Maximilian Theisen
> Sent: Wednesday, 18 April, 2018 18:09
> To: r-sig-meta-analysis at r-project.org
> Subject: [R-meta] Why am I getting different means when conducting
> multilevel meta-analysis with factorial moderator vs. as subgroups?
>
> Hello!
>
> I am conducting a multilevel meta-analysis using metafor in R. I have
> effect sizes ("esid") nested within samples ("sampleid") nested within
> publications ("studyid"). I have four subgroups ("task.type").
> The mean effect sizes for each subgroup differ depending on whether I
> use task.type as a moderator or run the rma.mv command for each subgroup
> independently.
>
> This is the code I use with task.type as moderator:
> multi.task <- rma.mv(yi = g, V = var.g, data = df,
>                      random = list(~ 1 | esid, ~ 1 | sampleid, ~ 1 | studyid),
>                      mods = ~ factor(task.type) - 1)
>
> Multivariate Meta-Analysis Model (k = 142; method: REML)
>
> Variance Components:
>
>             estim    sqrt  nlvls  fixed    factor
> sigma^2.1  0.0942  0.3069    142     no      esid
> sigma^2.2  0.7769  0.8814     29     no  sampleid
> sigma^2.3  0.0000  0.0001     25     no   studyid
>
> Test for Residual Heterogeneity:
> QE(df = 138) = 950.2971, p-val < .0001
>
> Test of Moderators (coefficient(s) 1:4):
> QM(df = 4) = 29.9283, p-val < .0001
>
> Model Results:
>
>                estimate      se     zval    pval    ci.lb    ci.ub
> factor(task)1    0.6072  0.2360   2.5729  0.0101   0.1446   1.0697   *
> factor(task)2   -0.5173  0.2559  -2.0212  0.0433  -1.0189  -0.0157   *
> factor(task)3    0.5755  0.2048   2.8100  0.0050   0.1741   0.9769  **
> factor(task)4    0.6173  0.4333   1.4246  0.1543  -0.2320   1.4665
>
> This is the one I use when computing the model for each task.type
> individually:
> task.X <- rma.mv(yi = g, V = var.g, data = df, subset = (task == "X"),
>                  random = list(~ 1 | esid, ~ 1 | sampleid, ~ 1 | studyid))
>
> Task 1:
>
> Multivariate Meta-Analysis Model (k = 27; method: REML)
>
> Variance Components:
>
>             estim    sqrt  nlvls  fixed    factor
> sigma^2.1  0.1685  0.4105     27     no      esid
> sigma^2.2  0.1307  0.3616      7     no  sampleid
> sigma^2.3  0.1307  0.3616      7     no   studyid
>
> Test for Heterogeneity:
> Q(df = 26) = 115.1759, p-val < .0001
>
> Model Results:
>
> estimate      se     zval    pval    ci.lb   ci.ub
>  -0.0649  0.2289  -0.2836  0.7767  -0.5135  0.3837
>
> Task 2:
> estimate      se    zval    pval    ci.lb   ci.ub
>   0.3374  0.6983  0.4832  0.6290  -1.0312  1.7060
>
> Task 3:
> estimate      se    zval    pval   ci.lb   ci.ub
>   0.3862  0.1254  3.0808  0.0021  0.1405  0.6319  **
>
> Task 4:
> estimate      se    zval    pval    ci.lb   ci.ub
>   0.6126  0.3409  1.7971  0.0723  -0.0555  1.2807  .
>
> Why are the results so different?
>
> Best,
> Max
