New Challenges in Machine Learning - Robustness and Nonconvexity

Organizers: Ilias Diakonikolas (diakonik@usc.edu), Rong Ge (rongge@cs.duke.edu), Ankur Moitra (moitra@mit.edu)

Machine learning has gone through a major transformation in the last decade. Traditional methods based on convex optimization have been replaced by highly non-convex approaches, including deep learning. In the worst case, the underlying optimization problems are NP-hard. To understand their success, we therefore need new tools that characterize the properties of natural inputs, and algorithms that work provably in beyond-worst-case settings. Robustness and nonconvexity are two of the major challenges.

Robustness: When we design provable learning algorithms, their performance is usually brittle: guarantees break down when the model assumptions are violated. Can we design provably robust algorithms? How can we find outliers in high dimensions? And are there interesting theoretical questions in more modern aspects of robustness, such as adversarial machine learning and generative adversarial networks?
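To make the outlier question concrete, here is a minimal sketch (not a method from any workshop talk; the data, threshold, and one-round filter are illustrative assumptions) of spectral filtering: outliers concentrated in one direction inflate the top eigenvalue of the empirical covariance, so projecting onto the top eigenvector exposes them even when no single coordinate does.

```python
import numpy as np

# Illustrative setup: inliers from N(0, I) in 20 dimensions, plus a small
# cluster of outliers shifted along the first axis.
rng = np.random.default_rng(0)
d, n_in, n_out = 20, 500, 25
inliers = rng.standard_normal((n_in, d))
outliers = rng.standard_normal((n_out, d)) + 10.0 * np.eye(d)[0]
X = np.vstack([inliers, outliers])

# The outlier cluster inflates variance along its direction, so the top
# eigenvector of the empirical covariance points toward it. Project onto
# that direction and discard points far from the median projection.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, -1]                          # top eigenvector
proj = X @ v
keep = np.abs(proj - np.median(proj)) < 3.0  # crude threshold, for illustration

robust_mean = X[keep].mean(axis=0)
print(np.linalg.norm(X.mean(axis=0)))   # naive mean is pulled toward the outliers
print(np.linalg.norm(robust_mean))      # filtered mean stays closer to the true mean 0
```

Real robust estimators iterate this filter with principled, data-dependent thresholds; the single fixed cutoff above is only meant to show why the spectral view helps in high dimensions.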

Nonconvexity: When and why can we solve nonconvex optimization problems in high dimensions? Recent works show that it is possible to escape saddle points and reach a local minimum, and that for several problems all local minima are as good as the global minimum. How can we extend these results to a richer set of problems? In particular, what is missing from the models studied so far in explaining what actually happens in deep nets?
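The saddle-point phenomenon can be seen on a toy objective. Below is a minimal sketch (the objective, step size, and perturbation rule are illustrative assumptions, not any speaker's algorithm): f(x, y) = x²/2 + y⁴/4 - y²/2 has a saddle at the origin and global minima at (0, ±1); plain gradient descent started at the saddle never moves, while adding a tiny random perturbation when the gradient vanishes lets it escape.

```python
import numpy as np

def grad(p):
    # Gradient of f(x, y) = x^2/2 + y^4/4 - y^2/2:
    # saddle at (0, 0), global minima at (0, +1) and (0, -1).
    x, y = p
    return np.array([x, y**3 - y])

rng = np.random.default_rng(1)

def gradient_descent(p, steps=2000, lr=0.1, perturb=False):
    p = p.astype(float).copy()
    for _ in range(steps):
        g = grad(p)
        if perturb and np.linalg.norm(g) < 1e-8:
            p += 1e-3 * rng.standard_normal(2)  # random kick off the flat point
            continue
        p -= lr * g
    return p

start = np.zeros(2)                        # exactly at the saddle
stuck = gradient_descent(start)            # zero gradient: plain GD stays put
escaped = gradient_descent(start, perturb=True)
print(stuck, escaped)                      # escaped ends near (0, +1) or (0, -1)
```

The theoretical results alluded to above are of course far more delicate (dimension dependence, stochastic gradients, convergence rates); this sketch only shows why an occasional random perturbation suffices to leave a strict saddle.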

Schedule

The workshop is on Friday June 23rd. The morning session is 9:00 - 12:00; the afternoon session is 1:00 - 4:00.

Time          | Speaker            | Title                                                                | Slides
9:00 - 9:30   | Ilias Diakonikolas | Efficiency versus Robustness in High-Dimensional Statistics          | Slides
9:40 - 10:10  | Po-Ling Loh        | Robust high-dimensional linear regression: A statistical perspective | Slides
10:20 - 10:50 | Chris De Sa        | Analyzing Stochastic Gradient Descent for Some Non-Convex Problems   | Slides
11:00 - 12:00 | Poster Session
12:00 - 1:00  | Lunch Break
1:00 - 1:30   | Sanjeev Arora      | How to get into ML: A case study.
1:40 - 2:10   | Adam Klivans       | Identifying Barriers for Learning Neural Networks
2:10 - 2:50   | Break
2:50 - 3:20   | John Wright        | Nonconvex Recovery of Low-Complexity Models
3:30 - 4:00   | Aaron Sidford      | Accelerated Methods for Non-Convex Optimization


Call for Posters

We invite posters to be presented during the workshop. We solicit posters in all areas of machine learning, including unpublished work and work recently published outside FOCS/STOC. To submit a poster, please email stoc2017ml@gmail.com with a link to (or attachment of) your paper or a 2-page abstract.

Important Dates

Submission Deadline: May 27, 2017
Notification of Acceptance: June 5, 2017
Workshop Date: June 23, 2017