[Statlist] FDS Seminar talk with Nathan Kallus, Cornell University - 23 September 2021, 16:15-17:15 CEST

Maurer Letizia letiziamaurer at ethz.ch
Mon Sep 13 12:10:43 CEST 2021


We are pleased to announce the following in-person talk in our ETH Foundations of Data Science Seminar series:

"Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes"
by Nathan Kallus, Cornell University

Date and Time: Thursday, 23 September 2021, 16:15-17:15 CEST 
Place: ETH Zurich, HG F 3

Abstract: "Contextual bandit problems are the primary way to model the inherent tradeoff between exploration and exploitation in dynamic personalized decision making in healthcare, marketing, revenue management, and beyond. Naturally, the tradeoff (that is, the optimal rate of regret) depends on how complex the underlying learning problem is -- how much can observing reward in one context tell us about mean rewards in another -- but this obvious-seeming relationship is not supported by the current theory. To characterize it more precisely, we study a nonparametric contextual bandit problem where the expected reward functions belong to a Hölder class with smoothness parameter β (roughly meaning they are β-times differentiable). We show how this interpolates between two extremes that were previously studied in isolation: non-differentiable bandits (β ≤ 1), where rate-optimal regret is achieved by running separate non-contextual bandits in different context regions, and parametric-response bandits (β = ∞), where rate-optimal regret can be achieved with minimal or no exploration due to infinite extrapolatability from one context to another. We develop a novel algorithm that carefully adjusts to any smoothness setting in between, and we prove its regret is rate-optimal by establishing matching upper and lower bounds, recovering the existing results at the two extremes. In this sense, our work bridges the gap between the existing literature on parametric and non-differentiable contextual bandit problems and between bandit algorithms that exclusively use global or local information, shedding light on the crucial interplay of complexity and regret in dynamic decision making."

Paper: https://arxiv.org/abs/1909.02553
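To give a concrete feel for the non-differentiable regime (β ≤ 1) mentioned in the abstract, where rate-optimal regret comes from running separate non-contextual bandits in different context regions, here is a minimal illustrative sketch: independent UCB1 bandits, one per context bin. This is hypothetical demonstration code, not the speaker's algorithm; the reward functions, bin count, and noise level are all assumptions chosen for illustration.

```python
import numpy as np

def binned_ucb(contexts, reward_fn, n_arms, n_bins, seed=0):
    """Run an independent UCB1 bandit in each context bin.

    This ignores smoothness across bins and learns each context
    region separately -- the classical strategy for the
    non-differentiable (beta <= 1) regime. Returns cumulative regret.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_bins, n_arms))  # pulls per (bin, arm)
    sums = np.zeros((n_bins, n_arms))    # summed rewards per (bin, arm)
    regret = 0.0
    for t, x in enumerate(contexts, start=1):
        b = min(int(x * n_bins), n_bins - 1)   # which context bin
        if counts[b].min() == 0:               # play each arm once per bin
            a = int(counts[b].argmin())
        else:                                  # UCB1 index within the bin
            means = sums[b] / counts[b]
            bonus = np.sqrt(2.0 * np.log(t) / counts[b])
            a = int(np.argmax(means + bonus))
        true_means = np.array([reward_fn(x, k) for k in range(n_arms)])
        r = true_means[a] + rng.normal(0.0, 0.1)  # noisy observed reward
        counts[b, a] += 1
        sums[b, a] += r
        regret += true_means.max() - true_means[a]
    return regret

# Example: two arms whose mean rewards cross at x = 0.5,
# so the best arm genuinely depends on the context.
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, size=5000)
f = lambda x, k: x if k == 0 else 1.0 - x
total_regret = binned_ucb(xs, f, n_arms=2, n_bins=10)
print(total_regret)
```

The point of the sketch is the structural choice: nothing learned in one bin informs any other bin, which is wasteful when the reward functions are smooth -- exactly the gap the talk's smoothness-adaptive algorithm addresses.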

Organisers: A. Bandeira, H. Bölcskei, P. Bühlmann, F. Yang
Seminar website: https://math.ethz.ch/sfs/news-and-events/data-science-seminar.html

IMPORTANT INFORMATION:

We are glad that this talk can take place in person again. Please take note of the Covid certificate and mask requirements for attending this lecture. Further details can be found at https://ethz.ch/services/en/news-and-events/internal-news/archive/2021/09/covid-certificate-requirement-in-all-lectures.html.

