Kangwook Lee (University of Wisconsin-Madison)

Feb 18, 2022

Title and Abstract

Improving Fairness via Federated Learning

Recently, many algorithms have been proposed for learning a fair classifier from decentralized data. However, many theoretical and algorithmic questions remain open. First, is federated learning necessary, i.e., can we simply train fair classifiers locally and aggregate them? In this work, we first propose a new theoretical framework, with which we demonstrate that federated learning can strictly boost model fairness compared with such non-federated algorithms. We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To bridge this gap, we propose FedFB, a state-of-the-art fair learning algorithm for decentralized data. The key idea is to modify the FedAvg protocol so that it effectively mimics centralized fair learning. Our experimental results show that FedFB significantly outperforms existing approaches, sometimes matching the performance of the centrally trained model.
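
To make the key idea more concrete, below is a minimal, hypothetical sketch of a FedAvg-style loop in which the server also maintains per-group weights (in the spirit of FairBatch) and raises the weight of the worse-off group after each round, so that the federated procedure mimics centralized fair training. The toy linear model, the synthetic data, and all function names are assumptions for illustration only, not the speaker's implementation.

```python
# Illustrative sketch only: FedAvg aggregation plus a server-side,
# FairBatch-style group reweighting step. Not the authors' FedFB code.
import numpy as np

rng = np.random.default_rng(0)

def make_client(n, shift):
    """Toy client data: features X, binary labels y, sensitive group g."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, n) > 0).astype(float)
    g = rng.integers(0, 2, size=n)
    return X, y, g

clients = [make_client(200, -0.3), make_client(300, 0.4)]

def group_losses(w, X, y, g):
    """Squared loss of a linear model, averaged per sensitive group."""
    pred = X @ w
    return np.array([np.mean((pred[g == k] - y[g == k]) ** 2) for k in (0, 1)])

def local_update(w, X, y, g, lam, lr=0.05, steps=20):
    """A few local gradient steps on the group-reweighted loss."""
    w = w.copy()
    for _ in range(steps):
        grad = np.zeros_like(w)
        for k in (0, 1):
            Xk, yk = X[g == k], y[g == k]
            grad += lam[k] * 2 * Xk.T @ (Xk @ w - yk) / len(yk)
        w -= lr * grad
    return w

w = np.zeros(2)
lam = np.array([0.5, 0.5])   # server-side group weights
eta = 0.1                    # step size for the weight update

for rnd in range(30):
    sizes = [len(y) for _, y, _ in clients]
    # FedAvg step: clients train locally, server averages by data size.
    local = [local_update(w, X, y, g, lam) for X, y, g in clients]
    w = sum(n * wi for n, wi in zip(sizes, local)) / sum(sizes)
    # Reweighting step: increase the weight of the group with larger loss.
    losses = sum(n * group_losses(w, X, y, g)
                 for n, (X, y, g) in zip(sizes, clients)) / sum(sizes)
    lam[np.argmax(losses)] += eta
    lam /= lam.sum()

print("final group weights:", lam, "final group losses:", losses)
```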

Bio

Kangwook Lee is an Assistant Professor in the Electrical and Computer Engineering department and the Computer Sciences department (by courtesy) at the University of Wisconsin-Madison. His research interests lie in trustworthy and scalable machine learning algorithms and systems, using tools from information theory and coding theory.