[R-meta] Why does rma.mv not show the same results as robumeta?
James Pustejovsky
jepusto at gmail.com
Mon May 24 04:44:20 CEST 2021
Hi Cátia,
Here are links to some previous listserv discussions on this topic:
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-September/000223.html
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000130.html
From the small snippet of the data you sent, it looks like the predictor
variable you're interested in might vary at the within-study level (i.e.,
some studies have effect sizes for multiple groups, such as DLD and TD).
Is that correct? Is there a lot of variation within studies? If so, this
sort of data structure is one where the methods implemented in robumeta
tend to have lower power than what you get with rma.mv() + clubSandwich (as
discussed in the paper that Wolfgang linked). That might therefore be a
reason to prefer the metafor model.
One other thing to note: it looks like in your rma.mv() syntax, you are
treating the sampling errors of the effect size estimates as independent,
rather than allowing for correlation between estimates from the same sample. If
you have multiple estimates based on the same sample, it would probably be
better to treat them as having correlated sampling errors, using the
methods described in the paper Wolfgang linked to, as well as in this blog
post:
https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/
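
For instance, here is a minimal sketch of that approach (the working
correlation r = 0.8 is an assumption, chosen here to match the rho you used
in robu()):

library(metafor)
library(clubSandwich)

# Impute a block-diagonal sampling variance-covariance matrix, assuming a
# correlation of 0.8 among estimates from the same study:
V <- impute_covariance_matrix(vi = Data$vi, cluster = Data$Study, r = 0.8)

# Fit the multilevel model with V in place of the vi vector:
rma.model <- rma.mv(yi, V, mods = ~ factor(Group) - 1,
                    random = ~ 1 | Study/effectsizeID, data = Data)

# Cluster-robust (CR2) tests of the coefficients, clustering on Study:
coef_test(rma.model, vcov = "CR2", cluster = Data$Study)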
Kind Regards,
James
On Sun, May 23, 2021 at 2:00 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> I would suggest taking a look at:
>
> https://www.jepusto.com/publication/rve-meta-analysis-expanding-the-range/
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: Cátia Ferreira De Oliveira [mailto:cmfo500 using york.ac.uk]
> >Sent: Sunday, 23 May, 2021 19:54
> >To: Viechtbauer, Wolfgang (SP)
> >Cc: r-sig-meta-analysis using r-project.org
> >Subject: Re: [R-meta] Why does rma.mv not show the same results as robumeta?
> >
> >Thank you for your quick response!
> >Is there any good source of information on which option would be the most
> >adequate for a meta-analysis with dependencies, i.e. whether one should use
> >a) rma.mv; b) rma.mv + robust() or clubSandwich; or c) robumeta?
> >
> >Thank you!
> >
> >Best wishes,
> >
> >Catia
> >
> >On Sun, 23 May 2021 at 17:34, Viechtbauer, Wolfgang (SP)
> ><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >Dear Cátia,
> >
> >robumeta uses robust variance estimation. If you want to do the same based
> >on an 'rma.mv' object, you need to use robust() or, even better, the
> >clubSandwich package. See here for examples:
> >
> >https://wviechtb.github.io/metafor/reference/robust.html
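> >
> >For instance, a minimal sketch (assuming a fitted 'rma.mv' object 'res' and
> >a Study clustering variable, as in your code below):
> >
> ># robust variance estimation based on the fitted model:
> >robust(res, cluster = Data$Study)
> ># or, with the CR2 small-sample adjustment from clubSandwich:
> >clubSandwich::coef_test(res, vcov = "CR2", cluster = Data$Study)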
> >
> >However, the results still won't be exactly the same. There is at least one
> >post in the archives that discusses the somewhat subtle differences. If you
> >go here:
> >
> >
> >https://www.google.com/search?hl=EN&source=hp&q=site:https://stat.ethz.ch/pipermail/r-sig-meta-analysis
> >
> >you can add some appropriate search strings to find those posts (I believe
> >it was James Pustejovsky who explained this quite thoroughly, so you might
> >want to include 'James' in your search terms).
> >
> >Best,
> >Wolfgang
> >
> >>-----Original Message-----
> >>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using
> >>r-project.org] On Behalf Of Cátia Ferreira De Oliveira
> >>Sent: Sunday, 23 May, 2021 3:51
> >>To: r-sig-meta-analysis using r-project.org
> >>Subject: [R-meta] Why does rma.mv not show the same results as robumeta?
> >>
> >>Hello,
> >>
> >>I have conducted a meta-analysis, which I am currently analysing, looking
> >>at the relationship between memory and language/literacy; multiple studies
> >>contributed more than one effect size. I preregistered doing the analyses
> >>in robumeta, but I am interested in checking how the results converge
> >>across packages, as I am tempted to use metafor for my next meta-analysis
> >>given how easy it is to plot, check for publication bias, etc. with this
> >>package. When I ran both models, they produced different results and I am
> >>a bit unsure why. I know the estimates are not that different, but what
> >>surprises me is that the DD group has the higher estimate in one model,
> >>whereas in the other it is the DLD group. Maybe I have done something
> >>wrong. Does anyone have any thoughts?
> >>
> >># Multilevel model looking at the relationship between memory and
> >># language/literacy; multiple studies have contributed multiple effect sizes.
> >>
> >>head(Data)
> >>
> >>rma.model <- rma.mv(yi, vi, mods = ~ factor(Group) - 1,
> >>                    random = ~ 1 | Study/effectsizeID, data = Data)
> >>rma.model
> >>
> >>Multivariate Meta-Analysis Model (k = 414; method: REML)
> >>
> >> logLik Deviance AIC BIC AICc
> >>-13.0662 26.1323 36.1323 56.2253 36.2805
> >>
> >>Variance Components:
> >>
> >> estim sqrt nlvls fixed factor
> >>sigma^2.1 0.0109 0.1044 37 no Study
> >>sigma^2.2 0.0082 0.0903 414 no Study/effectsizeID
> >>
> >>Test for Residual Heterogeneity:
> >>QE(df = 411) = 588.9613, p-val < .0001
> >>
> >>Test of Moderators (coefficients 1:3):
> >>QM(df = 3) = 11.1370, p-val = 0.0110
> >>
> >>Model Results:
> >>
> >>robu.model <- robu(formula = yi ~ factor(Group)-1, data = Data,
> >> studynum = Study, var.eff.size = vi,
> >> rho = .8, small = TRUE)
> >>print(robu.model)
> >>
> >>RVE: Correlated Effects Model with Small-Sample Corrections
> >>
> >>Model: yi ~ factor(Group) - 1
> >>
> >>Number of studies = 37
> >>Number of outcomes = 414 (min = 1 , mean = 11.2 , median = 6 , max = 52 )
> >>Rho = 0.8
> >>I.sq = 52.35398
> >>Tau.sq = 0.02918897
> >>
> >>Thank you!
> >>
> >>Best wishes,
> >>
> >>Catia