Statistical Problems and Convex Relaxations: Dreams of a General Theory
Presented By: Tselil Schramm, Harvard University
In practice, the algorithmic problems we want to solve are rarely "worst-case". Upon closer inspection, the difficulty of these problems depends on properties of the input data: when the data contains less information, the problem becomes harder, and we require more computational resources to solve it. Characterizing this relationship between information and computation is a central question in machine learning, cryptography, and more. Statistical models are an avenue for studying this question, and for building a theory that better supports computational endeavors: rather than designing algorithms for the worst case, we assume that inputs come from a structured distribution.
In this talk, I will describe my work towards building a theory of computation for statistical models using convex relaxations (and particularly the powerful sum-of-squares algorithm). I'll talk about my efforts in understanding the tradeoff between computation and statistical information, in giving fast (often linear- or almost-linear-time) algorithms based on slower semidefinite programs, and in making a broad connection between convex programs and spectral algorithms for statistical problems.
Tselil Schramm is a postdoc in theoretical computer science at Harvard and MIT, hosted by Boaz Barak, Jon Kelner, Ankur Moitra, and Pablo Parrilo. She obtained her PhD in computer science from U.C. Berkeley under the advisement of Prasad Raghavendra and Satish Rao, and was supported by an NSF Graduate Research Fellowship. She spent Fall 2017 as the Google Research Fellow at the Simons Institute program on optimization. Her research interests include statistical and average-case problems, optimization via convex programs (especially the sum-of-squares algorithm), spectral algorithms, and more.
This is a joint seminar with the Departments of Computer Science and Mathematics.
For more information, contact Eyal deLara (email@example.com)