[R-meta] effect size estimates distribution and field-specific benchmarks

Lukasz Stasielowicz lukasz.stasielowicz at uni-osnabrueck.de
Fri Dec 15 13:39:58 CET 2023


Dear Yefeng,


Effect size benchmarks are an interesting topic.

I will mention one potential question related to the third approach you 
described: Is it really realistic to assume that the effect sizes are 
normally distributed?

The question follows from the work of Bosco and colleagues, e.g.
Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. 
(2015). Correlational effect size benchmarks. Journal of Applied 
Psychology, 100(2), 431–449. https://doi.org/10.1037/a0038047

In one table (p. 436), they report percentiles for the distribution of 
effect sizes, which are based on approximately 150,000 effect sizes.
If we compare the distances between the 20th, 50th, and 80th percentiles, 
the distribution does not appear to be symmetrical:
20th vs. 50th percentile: r = .05 vs. r = .16 [Diff = .11]
80th vs. 50th percentile: r = .36 vs. r = .16 [Diff = .20]
A similar pattern can be seen if we divide effect sizes into categories 
(e.g., construct type).
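
One could run the same percentile-distance check on any dataset. A minimal 
sketch in base R, where r_values is a hypothetical vector of effect sizes 
(here just right-skewed placeholder data):

r_values <- rbeta(1e5, 2, 8)  # placeholder data; right-skewed by construction

q <- quantile(r_values, probs = c(.20, .50, .80))
lower_gap <- q[2] - q[1]  # 50th minus 20th percentile
upper_gap <- q[3] - q[2]  # 80th minus 50th percentile

# Under symmetry both gaps should be about equal; a clearly larger upper
# gap (as in Bosco et al.: .20 vs. .11) points to right skew.
c(lower = unname(lower_gap), upper = unname(upper_gap))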

Of course, it is possible that the effects are normally distributed in 
your dataset, or that you have already considered the normality 
assumption. I just wanted to point out a potential issue.
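
One rough way to probe this is to compare the empirical percentiles (your 
first approach) with the percentiles implied by the recovered normal 
distribution (your third approach). A sketch with metafor, assuming a 
hypothetical data frame dat with columns yi (SMD) and vi (sampling 
variance):

library(metafor)

fit <- rma(yi, vi, data = dat)  # random-effects model

probs <- c(.25, .50, .75)
empirical   <- quantile(dat$yi, probs = probs)   # approach 1
model_based <- qnorm(probs, mean = coef(fit),    # approach 3: Normal(mu, tau^2)
                     sd = sqrt(fit$tau2))

# Caveat: the observed yi also contain sampling error, so the empirical
# quantiles will be somewhat wider than the model-based ones even if the
# true effects are perfectly normal.
round(rbind(empirical, model_based), 2)

Discrepancies between the two rows beyond what sampling error alone would 
produce would hint that the normal model is too restrictive.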

Another issue that comes to mind pertains to the Bayesian analysis. The 
choice of the prior distribution (t, normal, Cauchy, etc.) might 
influence the shape of the posterior distribution. Therefore, one would 
need a strong justification for relying on only one prior. One possible 
solution is to conduct sensitivity analyses with different distributions 
and parameter values (mean, SD, etc.). Similar posterior distributions 
would indicate that the results are relatively robust.
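
A minimal sensitivity-analysis sketch with brms, assuming the same 
hypothetical dat plus a standard-error column sei and a study identifier; 
the three priors are purely illustrative, not a recommendation:

library(brms)

f <- bf(yi | se(sei) ~ 1 + (1 | study))  # random-effects meta-analysis

# Refit the same model under different priors for the overall effect
priors <- list(
  normal  = prior(normal(0, 1),       class = Intercept),
  student = prior(student_t(3, 0, 1), class = Intercept),
  cauchy  = prior(cauchy(0, 1),       class = Intercept)
)

fits <- lapply(priors, function(p) brm(f, data = dat, prior = p, refresh = 0))

# If the posterior summaries barely move across priors, the results are
# relatively robust to the prior choice.
lapply(fits, fixef)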




Best,
Lukasz
-- 
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Lise-Meitner-Straße 3
49076 Osnabrück (Germany)
Twitter: https://twitter.com/l_stasielowicz
Tel.: +49 541 969-7735



On 14.12.2023 23:40, Yefeng Yang <yefeng.yang1 at unsw.edu.au> wrote:
> Dear community,
> 
> I have a question about effect size distributions. It would be great if you would share your wisdom or simply comment on it.
> 
> I briefly describe my question as follows:
> 
> I have a collection of effect size estimates from a specific field, say SMDs. Assume the dataset is free of publication bias. Now I want to derive empirical benchmarks to gauge the magnitude of the effect size estimates.
> I am aware of the pitfalls of empirical benchmarks, since the interpretation of an effect size should be specific to the context of the question/field. But for now, let's discuss the technical approaches for obtaining reliable benchmarks.
> 
> At the moment, I am using four approaches:
> 
>    1.  using the empirical distribution to get the relevant percentiles, say the 25th, 50th, and 75th
>    2.  using a mixture model to approximate the distribution and get the relevant percentiles
>    3.  fitting a meta-analysis model to get the mean and variance, and then recovering a normal distribution to get the relevant percentiles
>    4.  fitting a Bayesian meta-analysis and getting the posterior distribution
> 
> My purpose is to see whether the different approaches converge in terms of the benchmarks (or, more precisely, the percentiles). Do you have any other approaches? General comments or suggestions are also welcome.
> 
> Regards,
> Yefeng
> 


