
Research
I am broadly interested in theoretical computer science and machine learning. Most of my research falls into one of the following directions:
• Representation Learning: Modern machine learning algorithms such as deep learning try to automatically learn useful hidden representations of documents, images, or other forms of data. We try to identify which hidden structures are useful, and how to design algorithms with provable guarantees for learning them.
• Non-convex Optimization: Many machine learning problems can be formalized as non-convex optimization problems. However, in the worst case non-convex optimization is NP-hard. We try to identify properties of the problems that make them "easy" to solve, and design more efficient algorithms.
• Tensor Decompositions: Tensors are higher-order generalizations of matrices. Tensor decomposition is a powerful algebraic tool that can be used to learn many latent variable models. We try to design faster, more robust algorithms for tensor decomposition, and apply them to new problems.
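To make the tensor decomposition direction concrete, below is a minimal sketch of CP (CANDECOMP/PARAFAC) decomposition of a 3-way tensor via alternating least squares, written in plain numpy. The helper names (`unfold`, `khatri_rao`, `cp_als`) and the pseudoinverse-based updates are illustrative choices, not taken from any specific paper or project.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (C-order columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R)."""
    R = A.shape[1]
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, R)

def cp_als(T, R, iters=500, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares.

    Each step fixes two factor matrices and solves a linear least-squares
    problem for the third, using the unfolding identity
    unfold(T, 0) = A @ khatri_rao(B, C).T (and its permutations).
    """
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

On an exactly low-rank tensor, ALS typically recovers the factors up to permutation and scaling; reconstructing the tensor as `np.einsum("ir,jr,kr->ijk", A, B, C)` checks the fit.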
I have also worked on other topics such as approximation algorithms and complexity of financial derivatives.
Representation Learning
In representation learning, we have worked on algorithms for problems such as topic models, sparse coding, social networks, and noisy-or networks. I hope to work on representation learning for more applications, including deep representations.