Corelab Seminar

Gautam Kamath
Privately Learning High-Dimensional Distributions

We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian in R^d and learning a product distribution in {0,1}^d in total variation distance. The sample complexity of our algorithms nearly matches the sample complexity of the optimal non-private learners for these tasks in a wide range of parameters. Thus, our results show that privacy comes essentially for free for these problems, providing a counterpoint to the many negative results showing that privacy is often costly in high dimensions. Our algorithms introduce a novel technical approach to reducing the sensitivity of the estimation procedure, which we call recursive private preconditioning, and which may find additional applications.
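To make the notion of "reducing sensitivity" concrete, here is a minimal one-dimensional sketch of a standard clip-and-noise private mean estimator (this is not the algorithm from the talk, whose recursive private preconditioning handles the high-dimensional setting; the clipping radius `clip_radius` and the pure-DP Laplace mechanism are illustrative assumptions). Clipping each sample bounds the sensitivity of the empirical mean, which determines how little noise suffices for privacy:

```python
import math
import random

def private_mean(samples, clip_radius, epsilon):
    """Epsilon-differentially private estimate of a 1-D mean.

    Clipping each sample to [-clip_radius, clip_radius] bounds the
    sensitivity of the empirical mean by 2*clip_radius/n, so adding
    Laplace noise of scale sensitivity/epsilon yields epsilon-DP.
    """
    n = len(samples)
    clipped = [max(-clip_radius, min(clip_radius, x)) for x in samples]
    sensitivity = 2.0 * clip_radius / n
    # Sample Laplace(0, sensitivity/epsilon) noise via inverse-CDF sampling.
    u = random.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return sum(clipped) / n + noise
```

The key point the talk's abstract alludes to: the noise scale shrinks with the clipping radius, so a procedure that privately "preconditions" the data to have small effective radius pays far less for privacy.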
Joint work with Jerry Li, Vikrant Singhal, and Jonathan Ullman. Preprint available here:

Biography: Gautam Kamath is an assistant professor at the David R. Cheriton School of Computer Science at the University of Waterloo. He has a B.S. in Computer Science and Electrical and Computer Engineering from Cornell University, and an M.S. and Ph.D. in Computer Science from the Massachusetts Institute of Technology, where he was advised by Constantinos Daskalakis. His research interests lie in theoretical computer science and principled tools for statistical data science, with a focus on concerns such as high-dimensional data, robustness, and privacy. He was a Microsoft Research Fellow as part of the Simons-Berkeley Research Fellowship Program at the Simons Institute for the Theory of Computing. He was awarded the Best Student Presentation Award at the ACM Symposium on Theory of Computing in 2012.