[R-meta] (no subject)
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer sending from maastrichtuniversity.nl
Fri Jun 18 10:40:26 CEST 2021
Please see my responses below.
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>Behalf Of Garance Delagneau
>Sent: Friday, 18 June, 2021 2:53
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] (no subject)
>I'm a bit stuck and would really appreciate any help on my issue.
>I'm doing a meta analysis (using R). There are several instances where authors
>reported multiple effect sizes (e.g., reported effect sizes for different
>timepoints) that I need to combine. I've tried to aggregate my multiple effect
>sizes using both the metafor package and the formula in Borenstein's manual
>(chapter 24 - using the mean effect size weighted according to the sample size and
>the formula attached to this email to calculate the variance).
The equation you showed assumes that an *unweighted* average is taken of the two effect sizes. So if you computed a weighted mean, then this equation is not correct.
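If it helps, here is a minimal base-R sketch of that calculation, assuming the equation you attached is the usual composite formula from Borenstein et al. (chapter 24) for two correlated effect sizes (the function and variable names here are mine):

```r
# Composite of two effect sizes y1, y2 with sampling variances v1, v2,
# where rho is the assumed correlation between the two sampling errors.
# Note the *unweighted* mean; the variance formula below matches it.
composite <- function(y1, y2, v1, v2, rho) {
  ybar <- (y1 + y2) / 2                              # unweighted average
  vbar <- (v1 + v2 + 2 * rho * sqrt(v1 * v2)) / 4    # variance of ybar
  c(yi = ybar, vi = vbar)
}

# illustrative values only
composite(y1 = -0.09, y2 = -0.02, v1 = 0.0029, v2 = 0.0046, rho = 0.5)
```

If you plug a *weighted* mean into the first line but keep that variance formula, the two no longer belong together, which is the mismatch I am pointing out above.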
>[...] using these two techniques are quite similar, the computed effect sizes are very [...]
>My questions are:
>• Why/how does yi (combined effect size) change quite a lot based on the value of
>rho when using the metafor package?
I assume you are talking about the aggregate() function and you are using something like:
aggregate(dat, cluster=dat$study, struct="CS", rho=<>)
The function by default computes weighted averages of the effects within studies (based on the variance-covariance matrix of the effects, which is constructed based on the sampling variances and the assumed value of rho). When rho changes, the var-cov matrix changes and hence the weighted averages change. You can also use weighted=FALSE in which case unweighted averages are computed and then rho does not affect these averages (although it still affects the variances of the computed averages).
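To make the dependence on rho concrete, the weighted average under struct="CS" can be reproduced in a few lines of base R (no call to aggregate() needed; wavg() is just my name for this sketch, and the data are the ones from your example below):

```r
ri <- c(-0.09, -0.02)            # correlations from the example below
ni <- c(337, 219)
vi <- (1 - ri^2)^2 / (ni - 1)    # sampling variances for measure="COR"

# GLS-type weighted average under a compound-symmetric var-cov structure
wavg <- function(yi, vi, rho) {
  V <- diag(vi)
  V[1, 2] <- V[2, 1] <- rho * sqrt(vi[1] * vi[2])   # covariance term
  w <- solve(V, rep(1, length(yi)))                 # inverse-V weights
  sum(w * yi) / sum(w)
}

wavg(ri, vi, rho = 0.555)   # about -0.0718
wavg(ri, vi, rho = 0.2)     # a different rho gives a different average
mean(ri)                    # unweighted: -0.055, does not depend on rho
```

As rho increases, relatively more weight shifts toward the more precise estimate, which is why the combined yi moves around as you change rho.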
>• Are the yi's that we get when using the metafor package correct?
I think so, but my opinion on this matter might be biased :) You can inspect the code of the function here: https://github.com/wviechtb/metafor/blob/master/R/aggregate.escalc.r If you find any mistakes/errors, please let me know!
>• The combined effect sizes using these methods are quite different from using the
>mean effect size. Is it correct to use the Metafor package?
>This is the example I've been working on
>Authors N Time corr
>Polanska 2017 337 2 -0.09
>Polanska 2017 219 1 -0.02
>Using R's metafor package, I obtained a combined effect size of -0.0718. Using
>Borenstein's method, I obtain an effect size of -0.06255.
Please provide a fully reproducible example. I had to guess what exactly you did with metafor, but it might have been this:
dat <- data.frame(study=1, ni=c(337,219), ri=c(-.09,-.02))
dat <- escalc(measure="COR", ri=ri, ni=ni, data=dat)
aggregate(dat, cluster=dat$study, struct="CS", rho=0.555)
At least this yields yi=-0.0718.
aggregate(dat, cluster=dat$study, struct="CS", rho=0.555, weighted=FALSE)
gives an unweighted average of -0.0550 (which is what Borenstein's equation assumes). I am not sure what exactly you did to get -0.06255, but a sample-size weighted average of the two correlations, i.e. weighted.mean(dat$yi, dat$ni), gives -0.06242806, which is close to -0.06255 but not identical (and again, this is not what Borenstein suggests, since his equation assumes an unweighted average).
>Note. I often have fewer than 10 articles to combine in my meta-analyses (it
>varies between 3 and 10). I expect heterogeneity to be moderate to high in most of [...]
>Thank you very much,
>PhD Student (Clinical Neuropsychology)
>M: 0452 323 762
>E: garance.delagneau using monash.edu