Wen Sun (Cornell University)

October 17, 2022

Title and Abstract

Hybrid RL: Using both offline and online data can make RL efficient

We consider a hybrid reinforcement learning setting (Hybrid RL), in which an agent has access to an offline dataset and the ability to collect experience via real-world online interaction. The framework mitigates the challenges that arise in both pure offline and pure online RL settings, allowing for the design of simple and highly effective algorithms, in both theory and practice. We demonstrate these advantages by adapting the classical Q-learning/iteration algorithm to the hybrid setting, which we call Hybrid Q-Learning or Hy-Q. In our theoretical results, we prove that the algorithm is both computationally and statistically efficient whenever the offline dataset supports a high-quality policy and the environment has bounded bilinear rank. Notably, we require no assumptions on the coverage provided by the initial distribution, in contrast with guarantees for policy gradient/iteration methods. In our experimental results, we show that Hy-Q with neural network function approximation outperforms state-of-the-art online, offline, and hybrid RL baselines on challenging benchmarks, including Montezuma's Revenge.
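
To make the algorithmic idea concrete, here is a minimal sketch of the Hy-Q loop on a toy tabular MDP: at each iteration, the Q-function is re-fit (fitted Q-iteration style) on the union of the offline dataset and all online data gathered so far, and the resulting greedy policy collects fresh online transitions. The toy MDP, the helper names (`step`, `collect`, `fit_q`), and the tabular regression oracle are illustrative assumptions for this sketch, not the paper's implementation, which uses general (neural network) function approximation.

```python
# Hypothetical, minimal sketch of Hy-Q on a toy tabular MDP.
# All names here are illustrative; they are not from the paper's code.
import numpy as np

rng = np.random.default_rng(0)
S, A = 5, 2          # number of states and actions
GAMMA = 0.95         # discount factor

# Toy MDP: random transition kernel P[s, a] (a distribution over next
# states) and a reward table R[s, a].
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(size=(S, A))

def step(s, a):
    """Sample one transition from the toy MDP."""
    s2 = rng.choice(S, p=P[s, a])
    return R[s, a], s2

def collect(policy, n):
    """Roll out a deterministic policy (one action per state) for n steps."""
    data, s = [], rng.integers(S)
    for _ in range(n):
        a = policy[s]
        r, s2 = step(s, a)
        data.append((s, a, r, s2))
        s = s2
    return data

def fit_q(data):
    """FQI-style regression oracle: average Bellman backup targets per
    (s, a) cell, iterated so targets use the current Q estimate."""
    Q = np.zeros((S, A))
    for _ in range(50):
        targets, counts = np.zeros((S, A)), np.zeros((S, A))
        for s, a, r, s2 in data:
            targets[s, a] += r + GAMMA * Q[s2].max()
            counts[s, a] += 1
        mask = counts > 0
        Q[mask] = targets[mask] / counts[mask]
    return Q

# Offline dataset from a fixed, randomly chosen behavior policy.
offline = collect(rng.integers(A, size=S), 500)

# Hy-Q loop: fit Q on offline + online data, then act greedily to
# gather more online data.
online = []
for t in range(20):
    Q = fit_q(offline + online)
    greedy = Q.argmax(axis=1)
    online += collect(greedy, 50)

print("Greedy policy:", Q.argmax(axis=1))
```

The design choice the abstract highlights is visible in the loop: every regression runs on the union of offline and online data, so the offline dataset supplies coverage of a high-quality policy while online interaction corrects for distribution shift, without any explicit exploration bonus.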

This is a joint work with Yuda Song, Yifei Zhou, Ayush Sekhari, J. Andrew Bagnell, and Akshay Krishnamurthy.

Bio

Wen Sun is an assistant professor in the CS department at Cornell University. His research is in machine learning and reinforcement learning (RL), and much of his work focuses on designing algorithms for efficient sequential decision making, understanding the exploration-exploitation tradeoff, and leveraging expert demonstrations to overcome exploration challenges. He received his Ph.D. in Robotics from Carnegie Mellon University in 2019. During the 2019-20 academic year, he was a postdoctoral researcher at Microsoft Research in New York City.