Steve Hanneke (TTIC)

Sep 18, 2020

Title and Abstract

Multi-task Learning: Optimal Rates and a No-Free-Lunch Theorem

Multitask learning and related areas such as multi-source domain adaptation address modern settings where datasets from N related distributions {P_t} are to be combined towards improving performance on any single such distribution D. A perplexing fact remains in the evolving theory on the subject: while we would hope for performance bounds that account for the contribution from multiple tasks, the vast majority of analyses yield bounds that improve, at best, with the number n of samples per task, and most often include terms that do not improve as the number of tasks N grows. It might therefore seem at first that the distributional settings or aggregation procedures considered in such analyses are somehow unfavorable; however, as we show, the picture is more nuanced, with interestingly hard regimes that might otherwise appear favorable. In particular, we consider a seemingly favorable classification scenario in which all tasks P_t share a common optimal classifier h* in a given function class of finite VC dimension; this setting can be shown to admit a broad range of regimes with improved oracle rates in terms of N and n. Some of our main results are as follows:
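
For concreteness, the setting can be formalized roughly as follows; the notation here is a minimal sketch chosen for illustration and may differ from the talk's own.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative formalization (not necessarily the talk's notation):
% N tasks, n samples each, a shared optimal classifier h* in a class H
% of finite VC dimension d, and a target distribution D.
\[
  S_t = \{(X^t_i, Y^t_i)\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} P_t,
  \qquad t = 1, \dots, N,
\]
\[
  h^* \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \, P_t\bigl(h(X) \neq Y\bigr)
  \quad \text{for every task } t,
\]
\[
  \text{goal: bound the excess risk on the target } D, \quad
  \mathcal{E}_D(\hat{h}) \;=\; P_D\bigl(\hat{h}(X) \neq Y\bigr) - P_D\bigl(h^*(X) \neq Y\bigr),
\]
ideally at a rate that improves with both $n$ and $N$ (and depends on the VC dimension $d$).
\end{document}
```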

  • We show that, even though such regimes admit minimax rates accounting for both n and N, no adaptive algorithm exists; that is, without access to distributional information, no algorithm can guarantee rates that improve with large N for n fixed.

  • With a bit of additional information, namely a ranking of the tasks {P_t} according to their relevance to a target task D, a simple rank-based procedure can achieve near-optimal excess risk guarantees, which improve with both n and N (see the sketch below). Interestingly, the optimal aggregation may exclude data from some tasks, even though they all share the same optimal classifier h*.
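
To make the second point concrete, below is a deliberately generic sketch of rank-based aggregation: pool data from the top-k ranked tasks, fit a classifier by (regularized) empirical risk minimization, and choose k using a small held-out sample from the target D. All names, the synthetic data, and the validation-based selection of k are illustrative assumptions; this should not be read as the authors' actual procedure or its analysis.

```python
# Illustrative sketch only (hypothetical setup, not the talk's procedure):
# rank tasks by relevance, pool the top-k for each k, and pick k on a small
# held-out target sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_task(shift, n):
    # Synthetic tasks sharing the same optimal classifier (sign of x[0]),
    # with covariate shift controlled by `shift` (larger shift = less relevant).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, y

# N source tasks, already ranked by (assumed known) relevance to the target.
ranked_tasks = [make_task(shift=s, n=50) for s in np.linspace(0.0, 3.0, 10)]
X_target_val, y_target_val = make_task(shift=0.0, n=30)  # small target sample

best_k, best_err, best_model = None, np.inf, None
for k in range(1, len(ranked_tasks) + 1):
    # Pool the k most relevant tasks and fit by regularized ERM.
    X_pool = np.vstack([X for X, _ in ranked_tasks[:k]])
    y_pool = np.concatenate([y for _, y in ranked_tasks[:k]])
    model = LogisticRegression().fit(X_pool, y_pool)
    err = np.mean(model.predict(X_target_val) != y_target_val)
    if err < best_err:
        best_k, best_err, best_model = k, err, model

print(f"selected top-{best_k} tasks, target validation error {best_err:.3f}")
```

The point the sketch mirrors is that the selected prefix may stop short of N: data from the least relevant tasks may be excluded even though every task shares the same optimal classifier h*.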

Based on joint work with Samory Kpotufe.

Bio

Steve Hanneke is a Research Assistant Professor at the Toyota Technological Institute at Chicago. His research explores the theory of machine learning, with a focus on reducing the number of training examples sufficient for learning. His work develops new approaches to supervised, semi-supervised, active, and transfer learning, and also revisits the basic probabilistic assumptions at the foundation of learning theory. Steve earned a Bachelor of Science degree in Computer Science from UIUC in 2005 and a Ph.D. in Machine Learning from Carnegie Mellon University in 2009, with a dissertation on the theoretical foundations of active learning.