[R-meta] different results with rma and rma.mv functions

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Jul 6 20:49:09 CEST 2017


Dear Angela,

You have not explained what kind of predictors are included in your model, but I assume it includes a factor. Here is an example:

library(metafor)
# compute log risk ratios and corresponding sampling variances for the BCG data
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
# mixed-effects meta-regression with 'alloc' as a categorical moderator
res <- rma(yi, vi, mods = ~ factor(alloc), data=dat)
res

levels(factor(dat$alloc))

The 'alternate' level is the reference level, so the intercept is the estimated effect for this level, and the p-value for the intercept tests whether this estimate is significantly different from 0. The coefficients for 'factor(alloc)random' and 'factor(alloc)systematic' are the estimated *differences* between 'random' and 'alternate' and between 'systematic' and 'alternate', and the p-values for these two coefficients test whether these *differences* are significantly different from 0. The QM-test of these two coefficients is a test of the factor as a whole (i.e., it tests the null hypothesis that the true effects are the same for all three levels).
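
If it helps to see this numerically, one could (as a sketch) recover the estimated effect for each level from this model via predict() with a 'newmods' matrix; the rows below correspond to the 'alternate', 'random', and 'systematic' levels:

# predicted average log risk ratio for each allocation level, obtained as
# linear combinations of the coefficients of the model with the intercept
predict(res, newmods = rbind(alternate  = c(0, 0),
                             random     = c(1, 0),
                             systematic = c(0, 1)))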

Let's remove the intercept:

# the same model, but with the intercept removed
res <- rma(yi, vi, mods = ~ factor(alloc) - 1, data=dat)
res

Now 'factor(alloc)alternate', 'factor(alloc)random', and 'factor(alloc)systematic' are the estimated effects for the three levels, and the p-values for the three coefficients test whether these estimates are significantly different from 0. The QM-test now tests the null hypothesis that all three true effects are 0. That is a different hypothesis than the one tested in the model with the intercept, which is why the QM p-values (and the significance of the individual coefficients) differ between the two parameterizations.
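
If the goal under this parameterization is still to test whether the effects differ across levels (rather than whether they are all 0), one could, as a sketch, test linear contrasts among the coefficients with anova() and a contrast matrix:

# tests of the contrasts 'random - alternate' and 'systematic - alternate'
anova(res, L = rbind(c(-1, 1, 0), c(-1, 0, 1)))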

Note that this behavior is not specific to metafor; lm(), glm(), and other modeling functions in R handle factors in the same way.
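
As a quick illustration outside of metafor (just a sketch with simulated data):

# lm() uses the same treatment coding: with the intercept, the coefficients
# for levels 'b' and 'c' are differences from the reference level 'a';
# without the intercept, the coefficients are the group means themselves
set.seed(1234)
y <- rnorm(30)
g <- factor(rep(c("a", "b", "c"), each = 10))
coef(lm(y ~ g))
coef(lm(y ~ g - 1))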

For a more thorough discussion, see also: http://www.metafor-project.org/doku.php/tips:testing_factors_lincoms

Best,
Wolfgang

-- 
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and    
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD    
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com    

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-
>project.org] On Behalf Of Angela Andrea Camargo Sanabria
>Sent: Thursday, July 06, 2017 17:07
>To: r-sig-meta-analysis at r-project.org
>Subject: [R-meta] different results with rma and rma.mv functions
>
>Dear R-community:
>
>I have run a mixed-effects model with moderators (and also a multilevel
>mixed-effects model), but I get different results depending on whether or
>not I remove the intercept. The differences are in the p-value of the test
>of moderators (QM) and in the significance level of each estimate. Do you
>have any idea what is going on?
>
>I use R version 3.3.2 and metafor version 1.9.9
>
>I appreciate your help. Thank you very much.
>
>Best,
>----------------------------------------------
>*Angela Andrea Camargo Sanabria*
>Becaria Postdoctoral
>Laboratorio de Análisis para la Conservación de la Biodiversidad
>Instituto de Investigaciones sobre los Recursos Naturales (INIRENA)
>Universidad Michoacana de San Nicolás de Hidalgo (UMSNH)
>skype: angela.camargo26

