[R-meta] Binomial Effect Size Display?
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Wed Jul 19 10:13:20 CEST 2017
I don't have the time to really dig into this, and since I hardly see any use of the BESD in practice, it seems like a non-issue to me. But this article seems highly pertinent:
Hsu, L. M. (2004). Biases of success rate differences shown in binomial effect size displays. Psychological Methods, 9(2), 183-197.
Some criticisms are also discussed in:
Randolph, J. J., & Edmondson, R. S. (2005). Using the Binomial Effect Size Display (BESD) to present the magnitude of effect sizes to the evaluation audience. Practical Assessment, Research & Evaluation, 10(14).
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Mark White
Sent: Saturday, July 15, 2017 15:36
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Binomial Effect Size Display?
Many meta-analyses will take their smaller-than-they-would-have-hoped
summary effect size and make it look bigger by using Rosenthal and Rubin's
(1982) binomial effect size display.
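For anyone unfamiliar with the metric: the BESD converts a correlation r into hypothetical "success rates" of 0.50 + r/2 and 0.50 - r/2 for two equal-sized groups in a 2x2 table. A minimal sketch of that transformation (the function name is mine, not from any package):

```python
def besd(r):
    """Binomial Effect Size Display (Rosenthal & Rubin, 1982).

    Converts a correlation r into the hypothetical success rates
    0.50 + r/2 (treatment) and 0.50 - r/2 (control) that the BESD
    presents in its 2x2 table of equal-sized groups.
    """
    treatment = 0.50 + r / 2
    control = 0.50 - r / 2
    return treatment, control

# Example: r = 0.32 yields success rates of about 66% vs. 34%,
# so a modest correlation is displayed as a 32-point difference.
treatment_rate, control_rate = besd(0.32)
```

This is exactly why the display can look impressive: the difference between the two rates is r itself, regardless of the actual base rates in the data, which is the point Hsu (2004) criticizes.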
I have always thought this is a misleading metric. Someone asked a question 5
years ago on CrossValidated about it, and I tried to answer it with a
simulation (the post provides a good, short summary of the original paper
and the metric, if you are unfamiliar with it).
I'm more of a simulate-and-see-if-it-works type of person, so is there anyone
who understands more of the math behind *why* this metric might be misleading?