New Challenges in Machine Learning - Robustness and Nonconvexity
Organizers: Ilias Diakonikolas (email@example.com), Rong Ge (firstname.lastname@example.org), Ankur Moitra (email@example.com)
Machine learning has gone through a major transformation in the last decade. Traditional methods based on convex optimization have been replaced by highly nonconvex approaches, including deep learning. In the worst case, the underlying optimization problems are NP-hard. Therefore, to understand their success, we need new tools to characterize the properties of natural inputs and to design algorithms that work provably in beyond-worst-case settings. Robustness and nonconvexity are two of the major challenges.
Robustness: When we design provable learning algorithms, their performance is often brittle with respect to modeling assumptions. Can we design provably robust algorithms? How can we find outliers in high dimensions? And are there interesting theoretical questions in more modern aspects of robustness, such as adversarial machine learning and generative adversarial networks?
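As a minimal illustration of why high-dimensional outliers are a challenge, the sketch below (an assumption-laden toy, not any of the organizers' algorithms) contaminates Gaussian data with a small fraction of adversarial points and compares the empirical mean to the coordinate-wise median; the dataset sizes and outlier placement are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 1000
eps = 0.1  # fraction of adversarially placed outliers

# Inliers drawn from N(0, I); outliers all placed far away in one corner.
inliers = rng.normal(0.0, 1.0, size=(int(n * (1 - eps)), d))
outliers = np.full((int(n * eps), d), 10.0)
data = np.vstack([inliers, outliers])

naive_mean = data.mean(axis=0)        # shifted by roughly eps * 10 in every coordinate
robust_est = np.median(data, axis=0)  # coordinate-wise median resists the shift

print(np.linalg.norm(naive_mean))  # error grows with the dimension
print(np.linalg.norm(robust_est))  # much closer to the true mean (the origin)
```

Even the coordinate-wise median incurs error that scales with the dimension in the worst case; designing robust estimators whose error is independent of the dimension is exactly the kind of question the workshop addresses.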
Nonconvexity: When and why can we solve nonconvex optimization problems in high dimensions? Recent works show that it is possible to escape saddle points and reach a local minimum, and that for several problems all local minima are as good as the global minimum. How can we extend these results to a richer set of problems? In particular, what is missing from the models studied so far that would explain what is actually going on in deep nets?
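A minimal sketch of the saddle-point phenomenon, on a toy objective chosen for illustration (the function, step size, and perturbation scale are assumptions, not taken from any workshop talk): f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the origin, and its only local minima, at (+-1, 0), are both global. Plain gradient descent started at the saddle stays stuck, but a small random perturbation, as in perturbed gradient descent, lets the iterate escape.

```python
import numpy as np

# Toy objective f(x, y) = (x^2 - 1)^2 + y^2:
# (0, 0) is a strict saddle; the only local minima, (+-1, 0), are both global.
def f(p):
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

rng = np.random.default_rng(1)
p = np.zeros(2)  # start exactly at the saddle point
lr = 0.05

for _ in range(500):
    g = grad(p)
    if np.linalg.norm(g) < 1e-6:
        # Gradient vanishes at the saddle: inject a small random perturbation.
        p = p + rng.normal(0.0, 0.01, size=2)
    else:
        p = p - lr * g

print(f(p))  # near 0: the iterate escaped the saddle and found a global minimum
```

The negative curvature at the origin amplifies the perturbation's x-component geometrically, which is the intuition behind polynomial-time escape guarantees for strict-saddle functions.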
The workshop is on Friday June 23rd. The morning session is 9:00 - 12:00; the afternoon session is 1:00 - 4:00.
Call for Posters
We invite posters to be presented during the workshop. We solicit posters in all areas of machine learning, including unpublished work and recently published work outside FOCS/STOC. To submit a poster, please email firstname.lastname@example.org with a link to (or attachment of) your paper or a 2-page abstract.