OPL: Optimal Policy Learning

Provides functions for optimal policy learning in socioeconomic applications, helping users learn the most effective policies from data in order to maximize empirical welfare. Specifically, 'OPL' allows the user to find "treatment assignment rules" that maximize overall welfare, defined as the sum of the policy effects estimated over all policy beneficiaries. Documentation about 'OPL' is provided in several international articles: Athey et al. (2021, <doi:10.3982/ECTA15732>), Kitagawa et al. (2018, <doi:10.3982/ECTA13288>), Cerulli (2022, <doi:10.1080/13504851.2022.2032577>), Cerulli (2021, <doi:10.1080/13504851.2020.1820939>), and the book by Gareth et al. (2013, <doi:10.1007/978-1-4614-7138-7>).
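
As a conceptual illustration of the empirical welfare criterion described above, the following minimal base R sketch evaluates a simple threshold-based assignment rule on simulated effect estimates. It does not use OPL's own functions; all names, data, and the rule itself are hypothetical.

    # Illustrative sketch only (not OPL's API): empirical welfare of a candidate
    # threshold-based assignment rule, using simulated policy-effect estimates.
    set.seed(1)
    n   <- 500
    x   <- runif(n)                      # observed covariate
    tau <- 0.5 - x + rnorm(n, sd = 0.1)  # hypothetical estimated policy effects

    rule    <- function(threshold) x < threshold            # treat units with x below the threshold
    welfare <- function(threshold) sum(tau[rule(threshold)]) # welfare = sum of effects over treated units

    thresholds <- seq(0, 1, by = 0.01)
    best <- thresholds[which.max(sapply(thresholds, welfare))]
    best  # threshold maximizing empirical welfare in this toy example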

Version: 1.0.0
Imports: stats, dplyr, ggplot2, pander, randomForest, tidyr
Suggests: knitr, rmarkdown
Published: 2025-02-03
Author: Federico Brogi [aut, cre], Barbara Guardabascio [aut], Giovanni Cerulli [aut]
Maintainer: Federico Brogi <federicobrogi at gmail.com>
License: GPL-3
NeedsCompilation: no
CRAN checks: OPL results

Documentation:

Reference manual: OPL.pdf
Vignettes: make_cate
opl_dt_c
opl_lc_c
opl_tb_c
overlapping
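
Assuming the package is installed, the vignettes listed above can be browsed and opened from R in the usual way, for example:

    # List the vignettes shipped with OPL and open one of them
    browseVignettes("OPL")
    vignette("make_cate", package = "OPL")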

Downloads:

Package source: OPL_1.0.0.tar.gz
Windows binaries: r-devel: not available, r-release: not available, r-oldrel: not available
macOS binaries: r-release (arm64): OPL_1.0.0.tgz, r-oldrel (arm64): not available, r-release (x86_64): OPL_1.0.0.tgz, r-oldrel (x86_64): not available
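
The package can be installed from CRAN in the standard way. Since Windows binaries are not yet available and the package needs no compilation, a source install should also work without additional build tools:

    # Install the released version from CRAN
    install.packages("OPL")

    # Force a source install where no binary is available
    install.packages("OPL", type = "source")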

Linking:

Please use the canonical form https://CRAN.R-project.org/package=OPL to link to this page.