[R-meta] Different outputs when comparing a random-effects model with an MLMA without intercept

Rafael Rios  biorafaelrm at gmail.com
Wed Mar 13 18:12:19 CET 2019


Dear Wolfgang,

I am sorry that my last question did not include any data. I have now
simulated a comparable situation using part of my data set. Using the full
data, I found that the average effect size differed from zero in the subgroup
"no" of the variable "potential_sce". After excluding the data for the
subgroup "yes" of "potential_sce", I fitted a random-effects MLMA and found
that the average effect size did not differ from zero. Which of the two
approaches would be correct? The files are attached.

library(metafor)
library(ape)

### Data
h1 <- read.csv2('full_data.csv', dec='.')
summary(h1)

h2 <- read.csv2('data_without_psce.csv', dec='.')
summary(h2)

### Phylogenies and correlations
# tree for the full data set (tree_full_data.tre)
pt1 <- read.tree(file=file.choose())
corr1 <- vcv(pt1, corr=TRUE)

# tree for the data set without the "yes" subgroup (tree_data_without_psce.tre)
pt2 <- read.tree(file=file.choose())
corr2 <- vcv(pt2, corr=TRUE)


### Models
# Note: the second argument of rma.mv() should contain the sampling variances;
# if 'sezf' holds standard errors, sezf^2 should be supplied instead.
meta1 <- rma.mv(zf, sezf, mods=~potential_sce-1,
                random = list(~1|studyID, ~1|speciesID),
                R=list(speciesID=corr1), data=h1)
meta1

meta2 <- rma.mv(zf, sezf,
                random = list(~1|studyID, ~1|speciesID),
                R=list(speciesID=corr2), data=h2)
meta2
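
In case it helps, here is a rough, untested sketch of how corr2 could instead
be derived by pruning the full tree pt1, so that both models use correlations
taken from the same tree (this assumes that the values in h2$speciesID match
the tip labels of pt1; 'pt2b' and 'corr2b' are just illustrative names):

### Optional check: derive the reduced correlation matrix from the full tree
pt2b <- drop.tip(pt1, setdiff(pt1$tip.label, unique(as.character(h2$speciesID))))
corr2b <- vcv(pt2b, corr=TRUE)
all.equal(corr2b[rownames(corr2), colnames(corr2)], corr2)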

Best wishes,

Rafael.
__________________________________________________________

Dr. Rafael Rios Moura
*scientia amabilis*

Behavioral Ecologist, Ph.D.
Postdoctoral Researcher
Universidade Estadual de Campinas (UNICAMP)
Campinas, São Paulo, Brazil

ORCID: http://orcid.org/0000-0002-7911-4734
Currículo Lattes: http://lattes.cnpq.br/4264357546465157
Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2






On Mon, 11 Mar 2019 at 05:57, Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:

> Dear Rafael,
>
> I cannot even attempt an answer to that question without a full
> understanding of the problem and data that you are working with.
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: Rafael Rios [mailto:biorafaelrm using gmail.com]
> Sent: Sunday, 10 March, 2019 15:02
> To: Viechtbauer, Wolfgang (SP)
> Cc: Michael Dewey; r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] Different outputs when comparing a random-effects
> model with an MLMA without intercept
>
> Thanks for the answers, Michael and Wolfgang. I suspected that the random
> effects played a role. Since I want to test whether the average effect size
> differs from zero in the data without a potential_sce bias (subgroup "no"),
> which of the two approaches do you recommend?
>
> Best wishes,
>
> Rafael.
> On Sun, 10 Mar 2019 at 10:40, Viechtbauer, Wolfgang (SP) <
> wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
> Dear Rafael,
>
> Let's try this again (instead of sending an empty mail -- sorry about
> that!).
>
> Indeed, the results differ because model2 estimates the variance
> components only based on the subset, while model1 estimates those variances
> based on all data. You would have to allow the variance components to
> differ for the "no" and "yes" levels of 'potential_sce' in 'model1' for the
> results to be identical. Actually, even then, I don't think you would get
> the exact same results, since you make use of the 'R' argument. Due to the
> correlation across species, the estimates (and SEs) of 'potential_sceno' and
> 'potential_sceyes' will be influenced by whatever species are included in
> the dataset. In the subset, certain species are not included (240 instead
> of 348), which is another reason why there are differences.
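>
> As a rough, untested sketch (shown here only with the study-level component
> split; the same idea applies to the species-level component), letting the
> variance components differ per level of 'potential_sce' could look something
> like this ('model1b' is just an illustrative name):
>
> model1b <- rma.mv(yi, vi, mods=~potential_sce-1,
>                   random = list(~1|effectsizeID, ~potential_sce|studyID,
>                                 ~1|speciesID),
>                   struct="DIAG",
>                   R=list(speciesID=phylogenetic_correlation), data=h)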
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: Michael Dewey [mailto:lists using dewey.myzen.co.uk]
> Sent: Thursday, 07 March, 2019 18:06
> To: Rafael Rios; Viechtbauer, Wolfgang (SP);
> r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] Different outputs when comparing a random-effects
> model with an MLMA without intercept
>
> Dear Rafael
>
> I think this may be related to the issue outlined by Wolfgang in this
> section of the web-site
>
> http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
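>
> In short, that page compares two independent estimates with a Wald-type
> test, roughly along these lines (a sketch only; 'res.no' and 'res.yes' stand
> for models fitted separately to the two subgroups):
>
> dif  <- coef(res.no) - coef(res.yes)
> zval <- dif / sqrt(res.no$se^2 + res.yes$se^2)
> pval <- 2 * pnorm(abs(zval), lower.tail=FALSE)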
>
> Michael
>
> On 07/03/2019 16:46, Rafael Rios wrote:
> > Dear Wolfgang and All,
> >
> > I am conducting a meta-analysis to evaluate potential bias in a fixed
> > predictor with two subgroups (predictor: yes and no). Because I found a
> > bias, I removed the observations of subgroup "yes" and fitted a
> > random-effects model. However, when I compared the output of the first
> > model without intercept with the output of the random-effects model, I
> > obtained different results, especially in the estimation of the confidence
> > intervals. I was expecting to find similar results because the model
> > without intercept tests whether the average outcome differs from zero. Can
> > you explain in which cases this can happen? Any help is welcome.
> >
> >
> > model1 <- rma.mv(yi, vi, mods=~potential_sce-1,
> >                  random = list(~1|effectsizeID, ~1|studyID, ~1|speciesID),
> >                  R=list(speciesID=phylogenetic_correlation), data=h)
> >
> > #Multivariate Meta-Analysis Model (k = 1850; method: REML)
> > #
> > #Variance Components:
> > #                estim    sqrt  nlvls  fixed        factor    R
> > #sigma^2.1     0.0145  0.1204   1850     no  effectsizeID   no
> > #sigma^2.2     0.0195  0.1397    468     no       studyID   no
> > #sigma^2.3     0.2386  0.4885    348     no     speciesID  yes
> > #
> > #Test for Residual Heterogeneity:
> > #QE(df = 1848) = 10797.5993, p-val < .0001
> > #
> > #Test of Moderators (coefficients 1:2):
> > #QM(df = 2) = 17.6736, p-val = 0.0001
> > #
> > #Model Results:
> > #                  estimate      se    zval    pval    ci.lb   ci.ub
> > #potential_sceno     0.2843  0.1659  1.7141  0.0865  -0.0408  0.6095  .
> > #potential_sceyes    0.3741  0.1668  2.2421  0.0250   0.0471  0.7011  *
> > #---
> > #Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> >
> >
> > model2 <- rma.mv(zf, vzf,
> >                  random = list(~1|effectsizeID, ~1|studyID, ~1|speciesID),
> >                  R=list(speciesID=phylogenetic_correlation),
> >                  data=subset(h, potential_sce=="no"))
> >
> > #Multivariate Meta-Analysis Model (k = 1072; method: REML)
> > #
> > #Variance Components:
> > #                estim    sqrt  nlvls  fixed        factor    R
> > #sigma^2.1     0.0140  0.1184   1072     no  effectsizeID   no
> > #sigma^2.2     0.0394  0.1986    264     no       studyID   no
> > #sigma^2.3     0.0377  0.1943    240     no     speciesID  yes
> > #
> > #Test for Heterogeneity:
> > #Q(df = 1071) = 4834.5911, p-val < .0001
> > #
> > #Model Results:
> > #  estimate      se    zval    pval   ci.lb   ci.ub
> > #    0.2989  0.0720  4.1494  <.0001  0.1577  0.4401  ***
> > #---
> > #Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> >
> >
> > I applied the same approach to another data set and obtained similar
> > results:
> >
> > dat <- dat.bangertdrowns2004
> > rbind(head(dat, 10), tail(dat, 10))
> > dat <- dat[!apply(dat[, c("length", "wic", "feedback", "info", "pers",
> >                           "imag", "meta")], 1, anyNA), ]
> >
> > head(dat)
> >
> > random.model <- rma.mv(yi, vi, random=list(~1|id, ~1|author),
> >                        data=subset(dat, subject=="Math"))
> >
> > random.model
> >
> > #Math
> > #Model Results:
> > #  estimate      se    zval    pval   ci.lb   ci.ub
> > #    0.2106  0.0705  2.9899  0.0028  0.0726  0.3487  **
> >
> > mixed.model <- rma.mv(yi, vi, mods=~subject-1,
> >                       random=list(~1|id, ~1|author), data=dat)
> >
> > anova(mixed.model, btt=2)
> >
> > # Math coefficient from the model:
> > #  estimate      se    zval    pval   ci.lb   ci.ub
> > #    0.2100  0.0697  3.0122  0.0026  0.0734  0.3467
> >
> > Best wishes,
> >
> > Rafael.
> > __________________________________________________________
> >
> > Dr. Rafael Rios Moura
> > *scientia amabilis*
> >
> > Behavioral Ecologist, Ph.D.
> > Postdoctoral Researcher
> > Universidade Estadual de Campinas (UNICAMP)
> > Campinas, São Paulo, Brazil
> >
> > ORCID: http://orcid.org/0000-0002-7911-4734
> > Currículo Lattes: http://lattes.cnpq.br/4264357546465157
> > Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
>


Attachments: data_without_psce.csv, tree_full_data.tre, full_data.csv,
tree_data_without_psce.tre, script_model.R

