Dylan Foster (Microsoft Research)
April 27, 2022

Title and Abstract

The Statistical Complexity of Interactive Decision Making

A fundamental challenge in interactive learning and decision making, ranging from bandit problems to reinforcement learning, is to provide sample-efficient, adaptive learning algorithms that achieve near-optimal regret. This question is analogous to the classical problem of optimal (supervised) statistical learning, where there are well-known complexity measures (e.g., VC dimension and Rademacher complexity) that govern the statistical complexity of learning. However, characterizing the statistical complexity of interactive learning is substantially more challenging due to the adaptive nature of the problem. In this talk, we will introduce a new complexity measure, the Decision-Estimation Coefficient, which is necessary and sufficient for sample-efficient interactive learning. In particular, we will provide:

1. a lower bound on the optimal regret for any interactive decision making problem, establishing the Decision-Estimation Coefficient as a fundamental limit.

2. a unified algorithm design principle, Estimation-to-Decisions, which attains a regret bound matching our lower bound, thereby achieving optimal sample-efficient learning as characterized by the Decision-Estimation Coefficient.

Taken together, these results give a theory of learnability for interactive decision making. When applied to reinforcement learning settings, the Decision-Estimation Coefficient recovers essentially all existing hardness results and lower bounds.

Bio

Dylan Foster is a senior researcher at Microsoft Research, New England. Previously, he was a postdoctoral fellow at MIT, and received his PhD in computer science from Cornell University, advised by Karthik Sridharan. His research focuses on problems at the intersection of machine learning and decision making, with a recent emphasis on reinforcement learning.
He has received several awards, including the best paper award at COLT (2019) and the best student paper award at COLT (2018, 2019).
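For readers who would like a preview of the central quantity before the talk, the Decision-Estimation Coefficient in the underlying paper (Foster, Kakade, Qian, and Rakhlin, "The Statistical Complexity of Interactive Decision Making," 2021) takes roughly the following form; the notation here is a sketch following that paper, with $\mathcal{M}$ a class of models, $\overline{M}$ a reference model, $\Pi$ the decision space, $f^{M}(\pi)$ the mean reward of decision $\pi$ under model $M$, and $\pi_{M}$ the optimal decision under $M$:

```latex
\mathsf{dec}_{\gamma}(\mathcal{M}, \overline{M})
  = \inf_{p \in \Delta(\Pi)} \, \sup_{M \in \mathcal{M}} \,
    \mathbb{E}_{\pi \sim p}\!\left[
      \underbrace{f^{M}(\pi_{M}) - f^{M}(\pi)}_{\text{regret under } M}
      \; - \; \gamma \cdot
      \underbrace{D_{\mathsf{H}}^{2}\!\big(M(\pi), \overline{M}(\pi)\big)}_{\text{information gained}}
    \right]
```

Here $D_{\mathsf{H}}^{2}$ denotes squared Hellinger distance between the observation distributions the two models induce for decision $\pi$. Informally, the coefficient measures the best achievable tradeoff, at scale $\gamma$, between incurring regret and acquiring information that distinguishes the true model from the learner's current estimate.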