[R-meta] different results with rma and rma.mv functions
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Jul 6 20:49:09 CEST 2017
You have not explained what kind of predictors are included in your model, but I assume it includes a factor. Here is an example:
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, mods = ~ factor(alloc), data=dat)
The 'alternate' level is the reference level. The intercept is therefore the estimated effect for this level. The p-value for the intercept tests whether the estimate for this level is significantly different from 0. The coefficients for 'factor(alloc)random' and 'factor(alloc)systematic' are the estimated *differences* between 'random' and 'alternate' and between 'systematic' and 'alternate'. The p-values for these two coefficients test whether the *differences* are significantly different from 0. The QM-test of these two coefficients is a test of the factor as a whole (i.e., it tests the null hypothesis that the true effects are the same for all three levels).
Let's remove the intercept:
res <- rma(yi, vi, mods = ~ factor(alloc) - 1, data=dat)
Now 'factor(alloc)alternate', 'factor(alloc)random', and 'factor(alloc)systematic' are the estimated effects for the three levels. The p-values for the three coefficients test whether the estimates are significantly different from 0. The QM-test tests the null hypothesis that all three true effects are 0.
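The correspondence between the two parameterizations can be checked directly. A sketch, assuming the metafor package is installed (dat.bcg ships with metafor):

```r
library(metafor)

dat  <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res1 <- rma(yi, vi, mods = ~ factor(alloc), data=dat)      # with intercept
res2 <- rma(yi, vi, mods = ~ factor(alloc) - 1, data=dat)  # without intercept

# the per-level estimates from res2 equal the intercept of res1 plus the
# corresponding difference coefficients:
lvl1 <- unname(c(coef(res1)[1], coef(res1)[1] + coef(res1)[2:3]))
stopifnot(all.equal(unname(coef(res2)), lvl1))

# the test of the factor as a whole (equality of the three levels) is the
# joint test of coefficients 2 and 3 in res1, which anova() gives via btt:
anova(res1, btt=2:3)
```

The two models are the same model in different coordinates; only the hypotheses tested by the coefficient tests and by QM change.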
Note that this behavior is not specific to metafor; lm(), glm(), and other modeling functions behave the same way.
For a more thorough discussion, see also: http://www.metafor-project.org/doku.php/tips:testing_factors_lincoms
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Angela Andrea Camargo Sanabria
>Sent: Thursday, July 06, 2017 17:07
>To: r-sig-meta-analysis at r-project.org
>Subject: [R-meta] different results with rma and rma.mv functions
>I have run a mixed-effects model with moderators (and also a multilevel
>mixed-effects model), but I get different results depending on whether or
>not I remove the intercept. The differences are in the p-value of the test
>of moderators (QM) and the significance level of each estimate. Do you
>have any idea what is going on?
>I use R version 3.3.2 and metafor version 1.9.9
>I appreciate your help. Thank you very much.
>*Angela Andrea Camargo Sanabria*
>Laboratorio de Análisis para la Conservación de la Biodiversidad
>Instituto de Investigaciones sobre los Recursos Naturales (INIRENA)
>Universidad Michoacana de San Nicolás de Hidalgo (UMSNH)