[R-meta] Questions about Omnibus tests
Rafael Rios
biorafaelrm sending from gmail.com
Tue Oct 30 18:27:14 CET 2018
Dear Michael,
Thank you for the answer. Isn't Zaykin's approach applicable to a
multilevel meta-analysis? Is using the variance as the weight the best
approach? Sorry if the question is too simple, but I am not sure whether
I should use the standard error or the variance as the weight.
Best wishes,
Rafael.
__________________________________________________________
Dr. Rafael Rios Moura
*scientia amabilis*
Behavioral Ecologist, PhD
Postdoctoral Researcher
Universidade Estadual de Campinas (UNICAMP)
Campinas, São Paulo, Brazil
Currículo Lattes: http://lattes.cnpq.br/4264357546465157
ORCID: http://orcid.org/0000-0002-7911-4734
Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
<http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4244908A8>
On Tue, 30 Oct 2018 at 10:12, Michael Dewey <lists using dewey.myzen.co.uk>
wrote:
> Dear Rafael
>
> As far as your point 3 goes, the Zaykin reference you cite is about a
> weighted version of Stouffer's method for combining p-values and
> suggests weighting by the square root of the sample size. So I do not
> think it is relevant to the sort of analysis you are proposing.
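>
> For reference, a minimal sketch of that weighted Z-test (assuming
> vectors of one-sided p-values 'p' and sample sizes 'n'; both names are
> hypothetical):
>
> zaykin_z <- function(p, n) {
>   z <- qnorm(p, lower.tail = FALSE)  # convert p-values to z-scores
>   w <- sqrt(n)                       # Zaykin (2011): weight by sqrt(n)
>   z.comb <- sum(w * z) / sqrt(sum(w^2))
>   pnorm(z.comb, lower.tail = FALSE)  # combined one-sided p-value
> }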
>
> Michael
>
> On 30/10/2018 05:15, Rafael Rios wrote:
> > Dear Wolfgang,
> >
> > Thank you for the very helpful advice! I would be grateful if you
> > could help me again with my new questions. I organized them in the
> > topics below.
> >
> > 1. Does the QM-test, with an intercept in the model, evaluate whether
> > the average true outcomes of the subgroups differ from the reference
> > level or from 0? I found p > 0.05, probably meaning that there is no
> > difference among subgroups. However, if you look at the graph, there
> > is a higher effect size for the female choice subgroup compared to
> > the others. So, I am not sure about the best approach to evaluate
> > differences among outcomes. Why are the graph results so different
> > from the QM-test with an intercept in the model? Should I evaluate
> > results using anova(meta, btt=1:3)?
> >
> > You also suggested that the script for pairwise comparisons was
> > wrong. According to the link that you provided, it can also be
> > written as summary(glht(meta, linfct=rbind(c(0,0,1), c(0,1,0),
> > c(0,-1,1))), test=adjusted("none")). Was the argument
> > linfct=rbind(c(0,0,1)) used to compare the subgroups of female choice
> > (reference level) and male choice? What am I evaluating by using
> > summary(glht(meta, linfct=rbind(female=c(1,0,0), male=c(0,1,0))),
> > test=Chisqtest())?
> >
> > 2. Thank you for the correction of the I² formula. What is the best
> > approach to measure heterogeneity in a multilevel meta-analysis?
> > Maybe this one:
> > http://www.metafor-project.org/doku.php/tips:i2_multilevel_multivariate
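> >
> > If I understand that page correctly, the computation would be
> > something like this (a sketch, assuming my fitted rma.mv model 'meta'
> > and sampling variances 'vzf' in the data frame 'h_mc'):
> >
> > W <- diag(1 / h_mc$vzf)  # inverse of the sampling variances
> > X <- model.matrix(meta)  # fixed-effects design matrix
> > P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
> > 100 * sum(meta$sigma2) / (sum(meta$sigma2) + (meta$k - meta$p) / sum(diag(P)))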
> >
> > 3. I used the standard deviation to weight the effect sizes,
> > following Zaykin (2011). Is the variance a better weight than the
> > standard error in a multilevel meta-analysis? Reference: Zaykin, D.
> > V. (2011). Optimally weighted Z-test is a powerful method for
> > combining probabilities in meta-analysis. J. Evol. Biol. 24,
> > 1836–1841.
> >
> > 4. Finally, I agree with the exclusion of potential_sce as a random
> > effect. However, I need to control for this variable. An alternative
> > could be to include potential_sce as a fixed effect. Is this model
> > more appropriate? (I squared sezf, since the second argument takes
> > the variances.)
> >
> > meta <- rma.mv(zf, sezf^2, mods = ~ mate_choice + potential_sce,
> >                random = list(~ 1 | effectsizeID, ~ 1 | studyID, ~ 1 | species1),
> >                data = h_mc)
> >
> > Thank you again for the help.
> >
> > Best wishes,
> >
> > Rafael.
> > __________________________________________________________
> >
> > Dr. Rafael Rios Moura
> > /scientia amabilis/
> >
> > Behavioral Ecologist, PhD
> > Postdoctoral Researcher
> > Universidade Estadual de Campinas (UNICAMP)
> > Campinas, São Paulo, Brazil
> >
> > Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> > ORCID: http://orcid.org/0000-0002-7911-4734
> > Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
> >
> > <http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4244908A8>
> >
> > On Thu, 25 Oct 2018 at 16:59, Viechtbauer, Wolfgang (SP)
> > <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> >
> > Dear Rafael,
> >
> > With an intercept in the model, the QM-test tests all coefficients
> > except for the intercept. In this case, those coefficients reflect
> > differences relative to the reference level defined by the
> > intercept. So, the QM-test tells you whether the average true
> > outcome is different for the various levels or not. The QM-test is
> > not significant, so there is no (statistically significant) evidence
> > that the average true outcome differs across the various levels.
> >
> > The intercept is significantly different from 0, but this is a
> > completely different hypothesis and has nothing to do with the
> > QM-test here. The intercept is the estimated average true outcome
> > for the reference level. Whether it is different from 0 has nothing
> > to do with whether the other levels are different from the reference
> > level.
> >
> > Some useful reading:
> >
> > http://www.metafor-project.org/doku.php/tips:testing_factors_lincoms
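> >
> > In code, a minimal sketch (assuming your fitted model 'meta' with an
> > intercept plus two dummy coefficients for the three-level factor):
> >
> > anova(meta, btt=2:3)  # the QM-test of the non-intercept coefficients
> > predict(meta, newmods=rbind(c(0,0), c(1,0), c(0,1)))  # average true outcome per level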
> >
> > You are also not conducting pairwise comparisons. Your code computes
> > the estimated average true outcomes for various pairs of levels and
> > then chi^2 tests with df=2 are conducted to test the null hypothesis
> > that both of these average true outcomes are equal to 0. That is not
> > testing for the *difference* between the two levels. The pairwise
> > comparisons are:
> >
> > summary(glht(meta, linfct=rbind(c(1,0,0)-c(1,1,0))), test=Chisqtest())
> > summary(glht(meta, linfct=rbind(c(1,0,0)-c(1,0,1))), test=Chisqtest())
> > summary(glht(meta, linfct=rbind(c(1,0,1)-c(1,1,0))), test=Chisqtest())
> >
> > The first two are unnecessary, since the contrasts between the
> > reference level and the second and third level are already part of
> > the model output. All of these are not significant.
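> >
> > Note that the differencing above just yields contrast vectors; for
> > example, the third comparison is identical to the single contrast
> > c(0,-1,1):
> >
> > summary(glht(meta, linfct=rbind(c(0,-1,1))), test=Chisqtest())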
> >
> > As for the negative I^2 value: You are not using the correct
> > formula. It should be: 100*(106.866-102)/106.866. This can still
> > yield a negative value (in general, not in this case), in which case
> > the value is just set to 0. BUT: This equation comes from the
> > standard random-effects model (and assumes that we are using the
> > DL-estimator). You are fitting a more complex model (and using REML
> > estimation), so the usefulness of this equation in this context is
> > debatable.
> >
> > Finally, the model you are fitting is incorrectly specified. First,
> > you are setting the second argument of rma.mv() to 'sezf' (which is
> > apparently the SE of the estimates). However, the
> > second argument is for specifying the *variances* (or an entire
> > var-cov matrix). Second, you need to add random effects
> > corresponding to the individual estimates to the model. Adding
> > 'study-level' random effects does not replace the 'estimate-level'
> > random effects in multilevel models, they both need to be added to
> > the model. See also:
> >
> >
> > http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011#a_common_mistake_in_the_three-level_model
> >
> > So, you should be using:
> >
> > meta <- rma.mv(zf, vzf, mods = ~ mate_choice,
> >                random = list(~ 1 | studyID, ~ 1 | effectsizeID, ~ 1 | species1, ~ 1 | potential_sce),
> >                data = h_mc)
> >
> > Whether it is appropriate/useful to add random effects corresponding
> > to the levels of 'potential_sce' is also debatable. This variable
> > only has two levels, so the estimate of the variance component for
> > this factor is going to be very imprecise (see confint(meta,
> > sigma2=4) after fitting the model above). The estimated variance for
> > this factor turns out to be 0 here, so this is identical to dropping
> > this random effect altogether, so in the end it does not matter.
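> >
> > If you want to check this directly, a quick sketch (update() refits
> > the model with the random effect for potential_sce dropped):
> >
> > meta2 <- update(meta, random = list(~ 1 | studyID, ~ 1 | effectsizeID, ~ 1 | species1))
> > anova(meta, meta2)  # LRT for the potential_sce variance component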
> >
> > Best,
> > Wolfgang
> >
> > -----Original Message-----
> > From: R-sig-meta-analysis
> > [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of
> > Rafael Rios
> > Sent: Thursday, 25 October, 2018 21:13
> > To: Michael Dewey
> > Cc: r-sig-meta-analysis using r-project.org
> > Subject: Re: [R-meta] Questions about Omnibus tests
> >
> > Dear Michael,
> >
> > Thank you for the help. Indeed, I found a significant p-value in the
> > QM-test by removing the intercept or by using the btt=1:3 argument in
> > rma.mv(). However, with that approach I am testing whether each mean
> > outcome differs from zero. Instead, I need to test differences among
> > subgroups relative to a reference level, which requires including the
> > intercept:
> >
> > http://www.metafor-project.org/doku.php/tips:multiple_factors_interactions
> >
> > I am not sure about the correct approach and what results to report.
> > Can I really use the QM-test without the intercept to test
> > differences among subgroups?
> >
> > Best wishes,
> >
> > Rafael.
> > __________________________________________________________
> >
> > Dr. Rafael Rios Moura
> > *scientia amabilis*
> >
> > Behavioral Ecologist, PhD
> > Postdoctoral Researcher
> > Universidade Estadual de Campinas (UNICAMP)
> > Campinas, São Paulo, Brazil
> >
> > Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> > ORCID: http://orcid.org/0000-0002-7911-4734
> > Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
> >
> > On Thu, 25 Oct 2018 at 12:33, Michael Dewey
> > <lists using dewey.myzen.co.uk> wrote:
> >
> > > Dear Rafael
> > >
> > > I think the issue is that the test of the intercept tests whether
> > > that might be zero, whereas the test of the moderator tests whether
> > > the other two coefficients are zero. If you remove the intercept
> > > from the model, you should get a test for the moderator with 3 df
> > > (not 2 as at present), which tests whether all three coefficients
> > > are zero, which seems to be what you are after.
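> > >
> > > In code, something like this (a sketch, assuming the variable names
> > > from your attached script, with 'vzf' the sampling variances):
> > >
> > > meta0 <- rma.mv(zf, vzf, mods = ~ mate_choice - 1,
> > >                 random = list(~ 1 | studyID, ~ 1 | effectsizeID, ~ 1 | species1),
> > >                 data = h_mc)
> > > meta0  # the QM-test now has df = 3: are all three level means zero?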
> > >
> > > Michael
> > >
> > > On 25/10/2018 16:00, Rafael Rios wrote:
> > > > Dear Wolfgang and All,
> > > >
> > > > I am conducting a meta-analysis to evaluate the effects of mate
> > > > choice on the outcome. My dataset and script are attached. I
> > > > found conflicting results with the omnibus test. The QM-test had
> > > > a non-significant p-value, while the z-test shows a significant
> > > > p-value for the intercept (corresponding to the treatment of
> > > > female choice). When I undertook pairwise comparisons, I also
> > > > found differences among treatments consistent with the z-test
> > > > results. You can also observe these differences in the graph.
> > > > What exactly is each test (QM and z) evaluating? Why is the
> > > > QM-test reporting a p-value higher than 0.05, even when there are
> > > > differences in the pairwise comparisons? I also found a negative
> > > > value for I². Is there any problem with the model that would
> > > > produce such a result? My questions are organized inside the
> > > > script. Any help will be welcome.
> > > >
> > > > Best wishes,
> > > >
> > > > Rafael.
> > > > __________________________________________________________
> > > >
> > > > Dr. Rafael Rios Moura
> > > > /scientia amabilis/
> > > >
> > > > Behavioral Ecologist, PhD
> > > > Postdoctoral Researcher
> > > > Universidade Estadual de Campinas (UNICAMP)
> > > > Campinas, São Paulo, Brazil
> > > >
> > > > Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> > > > ORCID: http://orcid.org/0000-0002-7911-4734
> > > > Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
> >
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
>