[R-meta] Different outputs when comparing a random-effects model with an MLMA without an intercept
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Sun Mar 10 14:39:57 CET 2019
Dear Rafael,
Let's try this again (instead of sending an empty mail -- sorry about that!).
Indeed, the results differ because model2 estimates the variance components based only on the subset, while model1 estimates those variances based on all of the data. You would have to allow the variance components to differ between the "no" and "yes" levels of 'potential_sce' in model1 for the results to be identical. Actually, even then, I don't think you would get exactly the same results, since you make use of the 'R' argument. Due to the correlation across species, the estimates (and SEs) of 'potential_sceno' and 'potential_sceyes' are influenced by whatever species are included in the dataset. In the subset, certain species are not included (240 instead of 348), which is another reason why there are differences.
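For what it's worth, one way to let variance components differ between groups in metafor is the '~ inner | outer' notation with struct="DIAG". Below is a rough, untested sketch along those lines, using the variable names from your models. Note that struct="DIAG" also treats estimates from the same study but different 'potential_sce' levels as uncorrelated, so this is not an exact analogue of model1:

model1b <- rma.mv(yi, vi, mods = ~ potential_sce - 1,
   random = list(~ potential_sce | studyID,       # study-level variance per group
                 ~ potential_sce | effectsizeID,  # residual variance per group
                 ~ 1 | speciesID),
   struct = c("DIAG", "DIAG"),
   R = list(speciesID = phylogenetic_correlation), data = h)

This gives separate studyID- and effectsizeID-level variances for the "no" and "yes" levels, while still assuming a single phylogenetic variance component across both.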
Best,
Wolfgang
-----Original Message-----
From: Michael Dewey [mailto:lists at dewey.myzen.co.uk]
Sent: Thursday, 07 March, 2019 18:06
To: Rafael Rios; Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Different outputs when comparing a random-effects model with an MLMA without an intercept
Dear Rafael
I think this may be related to the issue outlined by Wolfgang in this
section of the web-site
http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
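In essence, that page illustrates that fitting the two subgroups separately allows tau^2 to differ between them, whereas a single model with the subgroup as a moderator assumes a common tau^2. A minimal sketch of that contrast with rma(), assuming a made-up data frame 'dat' with effect sizes 'yi', sampling variances 'vi', and a two-level factor 'group':

# separate fits: each subgroup gets its own tau^2 estimate
res.no   <- rma(yi, vi, data = dat, subset = group == "no")
res.yes  <- rma(yi, vi, data = dat, subset = group == "yes")

# single fit with the subgroup as moderator: a common tau^2 is assumed
res.both <- rma(yi, vi, mods = ~ group - 1, data = dat)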
Michael
On 07/03/2019 16:46, Rafael Rios wrote:
> Dear Wolfgang and All,
>
> I am conducting a meta-analysis to evaluate potential bias from a fixed
> predictor with two subgroups (predictor: yes and no). Because I found a
> bias, I removed the values of the "yes" subgroup and fitted a
> random-effects model. However, when I compared the output of the first
> model without an intercept with the output of the random-effects model,
> I obtained different results, especially for the confidence intervals.
> I was expecting to find similar results, because the model without an
> intercept tests whether the average outcome differs from zero. Can you
> explain in which cases this can happen? Any help is welcome.
>
>
> model1 <- rma.mv(yi, vi, mods = ~ potential_sce - 1,
>   random = list(~ 1 | effectsizeID, ~ 1 | studyID, ~ 1 | speciesID),
>   R = list(speciesID = phylogenetic_correlation), data = h)
>
> #Multivariate Meta-Analysis Model (k = 1850; method: REML)
> #
> #Variance Components:
> #            estim    sqrt  nlvls  fixed        factor    R
> #sigma^2.1  0.0145  0.1204   1850     no  effectsizeID   no
> #sigma^2.2  0.0195  0.1397    468     no       studyID   no
> #sigma^2.3  0.2386  0.4885    348     no     speciesID  yes
> #
> #Test for Residual Heterogeneity:
> #QE(df = 1848) = 10797.5993, p-val < .0001
> #
> #Test of Moderators (coefficients 1:2):
> #QM(df = 2) = 17.6736, p-val = 0.0001
> #
> #Model Results:
> #
> #                  estimate      se    zval    pval    ci.lb   ci.ub
> #potential_sceno     0.2843  0.1659  1.7141  0.0865  -0.0408  0.6095  .
> #potential_sceyes    0.3741  0.1668  2.2421  0.0250   0.0471  0.7011  *
> #---
> #Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
>
> model2 <- rma.mv(zf, vzf,
>   random = list(~ 1 | effectsizeID, ~ 1 | studyID, ~ 1 | speciesID),
>   R = list(speciesID = phylogenetic_correlation),
>   data = subset(h, potential_sce == "no"))
>
> #Multivariate Meta-Analysis Model (k = 1072; method: REML)
> #
> #Variance Components:
> #            estim    sqrt  nlvls  fixed        factor    R
> #sigma^2.1  0.0140  0.1184   1072     no  effectsizeID   no
> #sigma^2.2  0.0394  0.1986    264     no       studyID   no
> #sigma^2.3  0.0377  0.1943    240     no     speciesID  yes
> #
> #Test for Heterogeneity:
> #Q(df = 1071) = 4834.5911, p-val < .0001
> #
> #Model Results:
> #
> #estimate      se    zval    pval   ci.lb   ci.ub
> #  0.2989  0.0720  4.1494  <.0001  0.1577  0.4401  ***
> #---
> #Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
>
> I applied the same approach to another data set and obtained similar
> results:
>
> library(metafor)
> dat <- dat.bangertdrowns2004
> rbind(head(dat, 10), tail(dat, 10))
> dat <- dat[!apply(dat[,c("length", "wic", "feedback", "info", "pers",
> "imag", "meta")], 1, anyNA),]
>
> head(dat)
>
> random.model <- rma.mv(yi, vi, random = list(~ 1 | id, ~ 1 | author),
>   data = subset(dat, subject == "Math"))
>
> random.model
>
> #Math
> #Model Results:
> #
> #estimate      se    zval    pval   ci.lb   ci.ub
> #  0.2106  0.0705  2.9899  0.0028  0.0726  0.3487  **
>
> mixed.model <- rma.mv(yi, vi, mods = ~ subject - 1,
>   random = list(~ 1 | id, ~ 1 | author), data = dat)
>
> anova(mixed.model, btt=2)
>
> #Math
> #
> #estimate      se    zval    pval   ci.lb   ci.ub
> #  0.2100  0.0697  3.0122  0.0026  0.0734  0.3467
>
> Best wishes,
>
> Rafael.
> __________________________________________________________
>
> Dr. Rafael Rios Moura
> *scientia amabilis*
>
> Behavioral Ecologist, Ph.D.
> Postdoctoral Researcher
> Universidade Estadual de Campinas (UNICAMP)
> Campinas, São Paulo, Brazil
>
> ORCID: http://orcid.org/0000-0002-7911-4734
> Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2