[R-meta] rma.mv results issue

EILYSH THOMPSON ethompson at deakin.edu.au
Mon Jan 4 05:27:18 CET 2021


Hi Wolfgang,

You were bang on the money with the random effects!! I added a random effect for estimates within papers and I'm now getting the results I was expecting. I hadn't seen people add a random effect for estimates in the tutorials online, so I hadn't realized that could be an issue, but on reflection it makes a lot of sense. Thank you for getting back to me so quickly and helping me solve my issue.

Cheers,

Eilysh
________________________________
From: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
Sent: Wednesday, December 16, 2020 6:15 PM
To: EILYSH THOMPSON <ethompson using deakin.edu.au>; r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
Subject: RE: rma.mv results issue

Dear Eilysh,

For me to give a more informed answer, you would ideally have to provide a fully reproducible example that shows where you think things go wrong, or at least show the output of the models.

One thing that I do see, though, in your rma.mv() call is: random = ~ 1 | Paper. This assumes that the true effects within papers are homogeneous, which is a strong and often incorrect assumption. You should have random effects both at the paper level (as you have done) and for estimates within papers. See:

http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011

and especially the "A Common Mistake in the Three-Level Model" section.

So, do:

abiotic.data$Estimate <- 1:nrow(abiotic.data)

abiotic.m1 <- rma.mv(yi, V = vi, mods = ~ Soil.attribute - 1, random = ~ 1 | Paper/Estimate, method = "REML", data = abiotic.data)

This already might 'fix' the issue you are seeing.
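
(As a quick check, here is a minimal sketch, assuming the 'abiotic.data' object and the abiotic.m1 model fitted as above: refit the original two-level model and compare the two fits with anova(), which gives a likelihood-ratio test of the estimate-level variance component since both models have the same fixed effects under REML.)

library(metafor)

# original two-level model: paper-level random effects only
abiotic.m0 <- rma.mv(yi, V = vi, mods = ~ Soil.attribute - 1,
                     random = ~ 1 | Paper, method = "REML", data = abiotic.data)

# likelihood-ratio test of the within-paper (estimate-level) variance component;
# valid here because the fixed effects of the two models are identical
anova(abiotic.m0, abiotic.m1)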

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>On Behalf Of EILYSH THOMPSON
>Sent: Wednesday, 16 December, 2020 7:52
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] rma.mv results issue
>
>I am currently undertaking a meta-analysis on the impacts of large invasive
>ungulates and have noticed something strange happening with the results when
>I run an rma.mv model for a section of the data. One of my soil predictors
>(litter cover) had a mean effect size that was slightly positive, but knowing
>the data I'd expect it to be negative. All effect sizes for this predictor
>are negative, hence I'd expect an overall negative estimate. I removed an
>outlier, and the estimate shifted slightly in the negative direction. I
>removed some of my predictors, as I realised I didn't have enough df in the
>model, and that shifted it slightly further in the negative direction.
>However, it wasn't until I removed the predictor (bare ground) with the most
>strongly positive and significant estimate that I got the result I was
>expecting for my other predictor. It was as if that one predictor was
>dragging the results in a positive direction. Is there any explanation as to
>why this would be happening? Interestingly, when I ran a basic random-effects
>model without specifying my random effect, and an MCMCglmm model with the
>exact same structure as the problem rma.mv model, I got results closer to
>what I was expecting.
>
>When I run this model, where I specify the random effect, the estimate is
>positive (not significant):
>abiotic.m1 <- rma.mv(yi, V = vi, mods = ~ Soil.attribute - 1,
>random = ~ 1 | Paper, method = "REML", data = abiotic.data)
>
>When I run a basic random-effects model, the estimate is negative (significant):
>random_am <- rma(yi = yi, vi = vi, mods = ~ factor(Soil.attribute) - 1,
>method = "REML", data = abiotic.data)
>
>When I run this MCMCglmm model, the estimate is negative (significant):
>soil.m0 <- MCMCglmm(fixed = yi ~ Soil.attribute - 1, random = ~ Paper,
>mev = abiotic.data$vi, data = abiotic.data, prior = prior1,
>verbose = FALSE, nitt = 40000, burnin = 10000, thin = 300)
>
>Eilysh
