
Representation Learning

Representation Learning deals with the problem of finding transformations of data that facilitate analysis and reveal the relevant structure. I am interested in posing and analyzing theoretical models for representation learning in various domains, as well as deriving principled insights and algorithms that can be used on real-world data.

Word embedding algorithms assign a numerical vector to each word in a vocabulary, with the goal of capturing the semantic meaning of text in the geometry of the vector space. The most common and well-understood algorithms rely on simple pairwise co-occurrence statistics between words, and don't explicitly make use of richer structural information such as syntactic relationships. My work on syntax-aware compositional word embeddings addresses this limitation by introducing a model for word embeddings that captures syntactic dependencies and leads to a principled learning algorithm based on tensor decomposition.
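As a rough illustration of the co-occurrence-based approach (not the syntax-aware model itself), embeddings can be obtained by reweighting a word-context co-occurrence matrix with positive pointwise mutual information and taking a truncated SVD. The toy corpus, window size, and dimension below are placeholders.

```python
# Illustrative sketch of a classic co-occurrence-based embedding:
# PPMI weighting of a co-occurrence matrix followed by truncated SVD.
import numpy as np
from collections import Counter

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
window, dim = 2, 2

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count symmetric word/context co-occurrences within the window.
counts = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                counts[(idx[w], idx[sent[j]])] += 1

C = np.zeros((len(vocab), len(vocab)))
for (i, j), c in counts.items():
    C[i, j] = c

# Positive pointwise mutual information reweighting.
total = C.sum()
pw, pc = C.sum(1, keepdims=True) / total, C.sum(0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C / total) / (pw * pc))
ppmi = np.maximum(pmi, 0.0)
ppmi[~np.isfinite(ppmi)] = 0.0

# Low-dimensional embeddings from a truncated SVD of the PPMI matrix.
U, S, _ = np.linalg.svd(ppmi)
embeddings = U[:, :dim] * np.sqrt(S[:dim])
print({w: embeddings[idx[w]].round(2) for w in vocab})
```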
Many problems in representation learning can be posed mathematically as a low-rank tensor decomposition (including my work on syntax-aware compositional word embeddings). One important and general class is the Tucker decomposition, which factorizes a tensor into a smaller core tensor multiplied along each mode by low-dimensional factor matrices. While iterative methods exist for computing the Tucker decomposition, the natural least-squares optimization formulation of the problem is non-convex, and local search methods for it are not well understood. To address this, I characterize the optimization landscape for Tucker decomposition and give an efficient and provably correct local search algorithm that finds a global minimum (paper currently in submission).
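For concreteness, the sketch below computes a Tucker decomposition of a small random tensor via the higher-order SVD (HOSVD), one standard baseline method; it is not the local search algorithm from the paper, and the ranks and input are placeholders.

```python
# Minimal HOSVD sketch for a 3-way Tucker decomposition.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Return core tensor G and factor matrices, one per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each unfolding give the factors.
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract T with each factor's transpose along each mode.
    G = T
    for mode, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)
    return G, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 5, 4))
G, (U1, U2, U3) = tucker_hosvd(T, ranks=(3, 3, 2))

# Reconstruct from the factorization and report the relative error.
R = np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)
print(G.shape, np.linalg.norm(T - R) / np.linalg.norm(T))
```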
Many control problems involve highly nonlinear dynamics and observations that are noisy or contain extraneous information. On the other hand, the class of linear control systems is well understood theoretically and admits robust and efficient algorithms for optimal control. My work in this area seeks to find transformations of the state space under which the dynamics become tractable, e.g., linear. I pose and study models of control problems in which such linearizing transformations exist, and develop state representation learning algorithms based on the insights derived from these models.
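A rough illustration of the underlying idea (not the models from my work): lift the state with a feature map and fit linear dynamics in the lifted space by least squares. The system, the hand-chosen polynomial lift, and all dimensions below are placeholders.

```python
# Sketch: fit z_{t+1} ~ A z_t + B u_t where z = lift(x) is a lifted state.
import numpy as np

def lift(x):
    """Fixed illustrative feature map: the state plus a quadratic term."""
    return np.array([x[0], x[1], x[0] ** 2])

def step(x, u):
    """Toy nonlinear dynamics used only to generate transition data."""
    return np.array([x[1], -0.5 * x[0] ** 2 + u])

# Sample random one-step transitions (state, input, next state).
rng = np.random.default_rng(0)
X, U, Y = [], [], []
for _ in range(500):
    x = rng.standard_normal(2)
    u = rng.standard_normal()
    X.append(lift(x)); U.append([u]); Y.append(lift(step(x, u)))

# Least-squares fit of approximately linear dynamics in the lifted space.
Z = np.hstack([np.array(X), np.array(U)])   # rows are [z_t, u_t]
W, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)
A, B = W[:-1].T, W[-1:].T
print("A:\n", A.round(2), "\nB:\n", B.round(2))
```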

Machine Learning for Healthcare Data

Healthcare poses compelling and challenging problems for machine learning, due to the large diversity of data modalities and the high stakes of the learning tasks. Insurance claims are a source of particularly rich sequential data that capture the health state of individuals over time. In my Master's work, I developed machine learning techniques to utilize claims data (in particular, ICD-9/10 codes) in order to predict the onset of chronic diseases such as Type II Diabetes and Chronic Heart Disease.
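As a generic sketch of how this kind of prediction problem can be framed (not my Master's pipeline), each patient's claims history over an observation window can be encoded as a bag of ICD codes and fed to a classifier that predicts onset in a follow-up window. The codes, labels, and model below are illustrative placeholders.

```python
# Sketch: disease-onset prediction from claims codes with bag-of-codes
# features and a linear classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each patient: illustrative ICD-10 codes observed before the prediction
# date, plus a label for onset during the follow-up window.
patients = [
    (["E66.9", "I10", "R73.03"], 1),
    (["J06.9", "M54.5"], 0),
    (["E66.9", "R73.03", "Z71.3"], 1),
    (["S93.4", "M25.561"], 0),
]
docs = [" ".join(codes) for codes, _ in patients]
labels = [y for _, y in patients]

model = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),  # one feature per distinct code
    LogisticRegression(max_iter=1000),
)
model.fit(docs, labels)

# Predicted onset probability for a new patient's code history.
print(model.predict_proba([" ".join(["I10", "R73.03"])])[:, 1])
```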