[R-meta] Questions about Omnibus tests

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Oct 30 19:27:38 CET 2018


Dear Rafael,

1. "Does the QM-test, with an intercept in the model, evaluates if the average true outcomes of subgroups differ from the reference level or from 0?"

From the reference level.

"Why are the graph results so different from the QM-test with an intercept in the model?"

Your graph is not correct. It should be:

preds <- predict(meta, newmods=rbind(c(0,0), c(1,0), c(0,1))) # predicted effects for female, male, mutual
forest(preds$pred, sei=preds$se, slab=c("female", "male", "mutual")) # forest plot of the three predicted effects

The differences between the three levels are small.

"Should I evaluate results using anova(meta,btt=1:3)?"

anova(meta, btt=1:3) tests whether all three groups have an effect of zero. That does not test for differences between the groups.
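To make the distinction concrete (a small sketch, assuming the model was fitted with mods=~mate_choice, so coefficient 1 is the intercept for 'female' and coefficients 2 and 3 are the contrasts with 'female'):

anova(meta, btt=1:3) # H0: all three group means are zero
anova(meta, btt=2:3) # H0: 'male' and 'mutual' do not differ from 'female', i.e., no group differences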

"Was the argument linfct=rbind(c(0,0,1)) used to compare the subgroups of female choice (reference level) and male choice?"

No, this compares 'mutual' with 'female'.

"What am I evaluating by using summary(glht(meta, linfct=rbind(female=c(1,0,0), male=c(0,1,0))), test=Chisqtest())"

You are evaluating whether the intercept (and hence the effect for 'female') is 0 and whether there is a difference between 'male' and 'female'.
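If you actually want all pairwise comparisons between the three levels, something along these lines should work (a sketch, assuming the coefficient order is intercept/'female', 'male', 'mutual'):

library(multcomp)
summary(glht(meta, linfct=rbind("male - female"   = c(0,  1, 0),
                                "mutual - female" = c(0,  0, 1),
                                "mutual - male"   = c(0, -1, 1))), test=adjusted("none"))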

2. "What is the best approach to measure heterogeneity in a multilevel meta-analysis?"

I don't know what is best. The link you posted provides some possibilities for computing I^2-like measures for multilevel/multivariate models.
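For what it's worth, the approach described there boils down to something like this (a sketch, assuming 'meta' is the rma.mv() fit and h_mc$vzf holds the sampling variances):

W <- diag(1/h_mc$vzf) # weight matrix based on the sampling variances
X <- model.matrix(meta)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * sum(meta$sigma2) / (sum(meta$sigma2) + (meta$k - meta$p) / sum(diag(P)))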

3. "I used the standard deviation to weight the effect sizes, according to Zaykin (2011). Is variance a better measure of weight than se in a multilevel meta-analysis?"

As mentioned by Michael, this article is irrelevant.

4. "An alternative could be to include this potential_sce as a fixed variable." 

Sure.

"Is this model more appropriate?: meta=rma.mv(zf, sezf, mods=~mate_choice+potential_sce, random = list (~1|effectsizeID, ~1|studyID, ~1|species1), data = h_mc)"

You should pass the variances to the function:

meta <- rma.mv(zf, vzf, mods=~mate_choice+potential_sce, random=list(~1|effectsizeID, ~1|studyID, ~1|species1), data=h_mc)
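If only the standard errors are stored in the dataset, the sampling variances can be obtained by squaring them first (a small sketch, assuming sezf really contains the standard errors of zf and vzf is the name used above):

h_mc$vzf <- h_mc$sezf^2 # sampling variance = squared standard error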

Best,
Wolfgang

-----Original Message-----
From: Rafael Rios [mailto:biorafaelrm at gmail.com] 
Sent: Tuesday, 30 October, 2018 6:16
To: Viechtbauer, Wolfgang (SP)
Cc: Michael Dewey; r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Questions about Omnibus tests

Dear Wolfgang,

Thank you for the very helpful advice! I would be grateful if you could help me again with my new questions. I have organized them in the topics below.

1. Does the QM-test, with an intercept in the model, evaluate whether the average true outcomes of the subgroups differ from the reference level or from 0? I found p > 0.05, which probably means that there is no difference among the subgroups. However, if you look at the graph, there is a higher effect size for the subgroup of female choice compared to the others. So I am not sure about the best approach to evaluate differences among the outcomes. Why are the graph results so different from the QM-test with an intercept in the model? Should I evaluate the results using anova(meta, btt=1:3)?

You also suggested that the script for the pairwise comparisons was wrong. According to the link you provided, it can also be written as summary(glht(meta, linfct=rbind(c(0,0,1), c(0,1,0), c(0,-1,1))), test=adjusted("none")). Was the argument linfct=rbind(c(0,0,1)) used to compare the subgroups of female choice (the reference level) and male choice? What am I evaluating by using summary(glht(meta, linfct=rbind(female=c(1,0,0), male=c(0,1,0))), test=Chisqtest())?

2. Thank you for the correction of the I² formula. What is the best approach to measure heterogeneity in a multilevel meta-analysis? Maybe this one: http://www.metafor-project.org/doku.php/tips:i2_multilevel_multivariate

3. I used the standard deviation to weight the effect sizes, following Zaykin (2011). Is the variance a better weight than the standard error in a multilevel meta-analysis? Reference: Zaykin, D. V. (2011). Optimally weighted Z-test is a powerful method for combining probabilities in meta-analysis. Journal of Evolutionary Biology, 24, 1836-1841.

4. Finally, I agree with the exclusion of potential_sce as a random variable. However, I need to control for this variable. An alternative could be to include potential_sce as a fixed variable. Is this model more appropriate: meta=rma.mv(zf, sezf, mods=~mate_choice+potential_sce, random=list(~1|effectsizeID, ~1|studyID, ~1|species1), data=h_mc)?

Thank you again for the help.

Best wishes,

Rafael.
__________________________________________________________

Dr. Rafael Rios Moura
scientia amabilis

Behavioral Ecologist, PhD
Postdoctoral Researcher
Universidade Estadual de Campinas (UNICAMP)
Campinas, São Paulo, Brazil

Currículo Lattes: http://lattes.cnpq.br/4264357546465157
ORCID: http://orcid.org/0000-0002-7911-4734
Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
