[R-meta] Questions about Omnibus tests

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Oct 25 21:58:56 CEST 2018


Dear Rafael,

With an intercept in the model, the QM-test tests all coefficients except for the intercept. In this case, those coefficients reflect differences relative to the reference level defined by the intercept. So, the QM-test tells you whether the average true outcome is different for the various levels or not. The QM-test is not significant, so there is no (statistically significant) evidence that the average true outcome differs across the various levels.

The intercept is significantly different from 0, but this is a completely different hypothesis and has nothing to do with the QM-test here. The intercept is the estimated average true outcome for the reference level. Whether it is different from 0 has nothing to do with whether the other levels are different from the reference level.
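To make this concrete, here is a minimal sketch with made-up data (purely illustrative, not your model): with an intercept and a three-level factor, the QM-test is the joint test of coefficients 2 and 3 (btt=2:3 by default).

library(metafor)
set.seed(42)
dat <- data.frame(yi  = rnorm(30),
                  vi  = runif(30, min=0.01, max=0.1),
                  grp = factor(sample(c("a","b","c"), 30, replace=TRUE)))
res <- rma(yi, vi, mods = ~ grp, data=dat)
res                  # QM-test with df=2: H0: grpb = grpc = 0
anova(res, btt=2:3)  # the same omnibus test, made explicit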

Some useful reading:

http://www.metafor-project.org/doku.php/tips:testing_factors_lincoms

You are also not conducting pairwise comparisons. Your code computes the estimated average true outcomes for various pairs of levels and then conducts chi^2 tests with df=2 of the null hypothesis that both of these average true outcomes are 0. That is not a test of the *difference* between the two levels. The actual pairwise comparisons are:

library(multcomp)
summary(glht(meta, linfct=rbind(c(1,0,0)-c(1,1,0))), test=Chisqtest())  # level 1 vs level 2
summary(glht(meta, linfct=rbind(c(1,0,0)-c(1,0,1))), test=Chisqtest())  # level 1 vs level 3
summary(glht(meta, linfct=rbind(c(1,0,1)-c(1,1,0))), test=Chisqtest())  # level 3 vs level 2

The first two are unnecessary, since the contrasts between the reference level and the second and third levels are already part of the model output. None of these comparisons is significant.
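If you prefer to stay within metafor, the remaining comparison (second vs third level) can also be tested with anova() on the fitted model; note that the argument for linear combinations is 'L' in the metafor version current at the time of writing ('X' in later versions):

anova(meta, L=c(0, 1, -1))  # H0: b2 = b3, i.e., level 2 vs level 3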

As for the negative I^2 value: You are not using the correct formula. It should be: 100*(106.866-102)/106.866. This can still yield a negative value (in general, not in this case), in which case the value is just set to 0. BUT: This equation comes from the standard random-effects model (and assumes that we are using the DL-estimator). You are fitting a more complex model (and using REML estimation), so the usefulness of this equation in this context is debatable.
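In R, including the truncation at 0:

QE  <- 106.866                     # Q-statistic from the model output
dfQ <- 102                         # its degrees of freedom
100 * (QE - dfQ) / QE              # about 4.55%
max(0, 100 * (QE - dfQ) / QE)      # set to 0 if the computation is negative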

Finally, the model you are fitting is incorrectly specified. First, you are setting the second argument of rma.mv() to 'sezf' (which is apparently the SE of the estimates). However, the second argument is for specifying the *variances* (or an entire var-cov matrix). Second, you need to add random effects corresponding to the individual estimates to the model. Adding 'study-level' random effects does not replace the 'estimate-level' random effects in multilevel models; both need to be added to the model. See also:

http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011#a_common_mistake_in_the_three-level_model

So, you should be using:

meta <- rma.mv(zf, vzf, mods = ~ mate_choice,
               random = list(~ 1 | studyID, ~ 1 | effectsizeID,
                             ~ 1 | species1, ~ 1 | potential_sce),
               data = h_mc)
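where 'vzf' are the sampling variances, i.e. (assuming 'sezf' really are the standard errors) simply:

h_mc$vzf <- h_mc$sezf^2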

Whether it is appropriate/useful to add random effects corresponding to the levels of 'potential_sce' is also debatable. This variable only has two levels, so the estimate of the variance component for this factor is going to be very imprecise (this can be checked after fitting the model above; see below). The estimated variance for this factor turns out to be 0 here, which is identical to dropping this random effect altogether, so in the end it does not matter.
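To see how (im)precisely that component is estimated:

confint(meta, sigma2=4)   # CI for the 'potential_sce' variance component
profile(meta, sigma2=4)   # profile likelihood plot for the same component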

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Rafael Rios
Sent: Thursday, 25 October, 2018 21:13
To: Michael Dewey
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Questions about Omnibus tests

Dear Michael,

Thank you for the help. Indeed, I found a significant p-value in the
QM-test by removing the intercept or by using the btt=1:3 argument in
the function rma.mv. However, with that approach I am testing whether
each mean outcome is different from zero, whereas I need to test for
differences among subgroups relative to a reference level. That
approach requires the inclusion of the intercept:
http://www.metafor-project.org/doku.php/tips:multiple_factors_interactions

I am not sure about the correct approach and what results to report. Can I
really use the QM-test without the intercept to test differences among
subgroups?

Best wishes,

Rafael.
__________________________________________________________

Dr. Rafael Rios Moura
*scientia amabilis*

Behavioral Ecologist, PhD
Postdoctoral Researcher
Universidade Estadual de Campinas (UNICAMP)
Campinas, São Paulo, Brazil

Currículo Lattes: http://lattes.cnpq.br/4264357546465157
ORCID: http://orcid.org/0000-0002-7911-4734
Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2

On Thu, 25 Oct 2018 at 12:33, Michael Dewey <lists at dewey.myzen.co.uk>
wrote:

> Dear Rafael
>
> I think the issue is that the test of the intercept tests whether that
> might be zero whereas the test of the moderator tests whether the other
> two coefficients are zero. If you remove the intercept from the model
> you should get a test for the moderator with 3 df (not 2 as at present)
> which tests whether all three coefficients are zero which seems to be
> what you are after.
>
> Michael
>
> On 25/10/2018 16:00, Rafael Rios wrote:
> > Dear Wolfgang and All,
> >
> > I am conducting a meta-analysis to evaluate the effects of mate choice
> > on the outcome. My dataset and script are attached. I found
> > conflicting results with the omnibus test. The QM-test had a
> > non-significant p-value, while the z-test shows a significant p-value for
> > the intercept (corresponding to the treatment of female choice). When I
> > undertook pairwise comparisons, I also found differences among
> > treatments consistent with the z-test results. You can also observe
> > these differences in the graph. What exactly is each test (QM and z)
> > evaluating? Why is the QM-test reporting a p-value higher than 0.05, even
> > when there are differences in the pairwise comparisons? I also found a
> > negative value for I². Is there any problem with the model to report
> > such result? My questions are organized inside the script. Any help will
> > be welcome.
> >
> > Best wishes,
> >
> > Rafael.
> > __________________________________________________________
> >
> > Dr. Rafael Rios Moura
> > /scientia amabilis/
> >
> > Behavioral Ecologist, PhD
> > Postdoctoral Researcher
> > Universidade Estadual de Campinas (UNICAMP)
> > Campinas, São Paulo, Brazil
> >
> > Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> > ORCID: http://orcid.org/0000-0002-7911-4734
> > Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2

