Aaditya Ramdas (Berkeley)
Apr 10, 2017.
Title and Abstract
Decentralized decision making on networks with false discovery rate control
The field of distributed computation, learning, testing, and inference on graphs has witnessed large advances in theory and wide adoption in practice. However, there do not currently exist any methods for multiple hypothesis testing on graphs that are equipped with provable guarantees on error metrics like the false discovery rate (FDR). In this talk, we consider a novel but natural setting where distinct agents reside on the nodes of an undirected graph, and each agent possesses p-values corresponding to one or more hypotheses local to its node. Each agent must individually decide whether to reject one or more local hypotheses by communicating only with its neighbors, with the joint aim that the global FDR over the entire graph be controlled at a predefined level. We propose a simple decentralized family of Query-Test-Exchange (QuTE) message-passing algorithms that interpolate smoothly between two extremes: with zero communication budget, QuTE reduces to a simple Bonferroni correction, and with an unbounded communication budget or a centralized server node, QuTE reduces to the classical Benjamini-Hochberg (BH) procedure. Our main theoretical result is that the overall FDR is controlled at any time during the dynamic communication process, and that power increases monotonically with communication, matching the power of the gold-standard BH procedure after a number of rounds equal to the graph diameter. We also explain how to handle the quantization issues involved in communicating real-valued p-values by introducing a new concept called p-ranks. We study the power of our procedure in a simulation suite spanning different levels of connectivity and communication on a variety of graph structures, and provide an illustrative real-world example on an indoor sensor network.
(joint work with Jianbo Chen, Martin Wainwright, Michael Jordan)
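The abstract pins down both endpoints of the interpolation (Bonferroni with zero communication, BH with full communication), which suggests the following minimal Python sketch of a QuTE-style rule. The gossip scheme, the per-node threshold rule, and the name qute_sketch are illustrative assumptions reconstructed from the abstract, not the authors' exact algorithm; for simplicity each node holds a single hypothesis.

def qute_sketch(adj, pvals, alpha, rounds):
    """Hypothetical QuTE-style sketch (one hypothesis per node).

    adj:    dict mapping node -> set of neighbors (undirected graph)
    pvals:  dict mapping node -> p-value
    alpha:  target FDR level
    rounds: communication budget; 0 recovers a Bonferroni-like rule,
            and rounds >= graph diameter recovers classical BH
    """
    n = len(pvals)  # total number of hypotheses in the graph
    # Each node starts out knowing only its own (node, p-value) pair.
    known = {v: {v: pvals[v]} for v in adj}
    # Gossip: in each round, every node merges its neighbors' knowledge,
    # so after t rounds node v knows the p-values in its t-hop neighborhood.
    for _ in range(rounds):
        known = {
            v: {u: p for w in adj[v] | {v} for u, p in known[w].items()}
            for v in adj
        }
    # Local BH-style rule (an assumption): node v finds the largest k such
    # that the k-th smallest p-value it has seen is <= alpha * k / n, then
    # rejects its own hypothesis if its p-value clears that threshold.
    rejections = {}
    for v in adj:
        seen = sorted(known[v].values())
        k_hat = 0
        for k in range(1, len(seen) + 1):
            if seen[k - 1] <= alpha * k / n:
                k_hat = k
        rejections[v] = pvals[v] <= alpha * k_hat / n
    return rejections

# Example: a 4-node path graph 0-1-2-3 with diameter 3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
pvals = {0: 0.001, 1: 0.012, 2: 0.02, 3: 0.8}
for t in (0, 3):
    print(t, qute_sketch(adj, pvals, alpha=0.05, rounds=t))

On this toy graph with alpha = 0.05, zero rounds reject only nodes 0 and 1 (the Bonferroni threshold is alpha/n = 0.0125), while three rounds (the diameter) also reject node 2, matching the BH decisions and illustrating the monotone gain in power described in the abstract.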
Bio
Aaditya Ramdas is a postdoctoral researcher in Statistics and EECS at UC Berkeley, advised by Michael Jordan and Martin Wainwright. He finished his PhD in Statistics and Machine Learning at CMU, advised by Larry Wasserman and Aarti Singh. A central thread of his research focuses on modern aspects of reproducibility in science and technology, involving statistical testing and false discovery rate control in static and dynamic settings. He also has recent work on optimization, high-dimensional statistical learning, and sequential decision problems.