Research: Papers and Code

My research focuses on developing interpretable machine learning models that help people make more informed decisions. In particular, I aim to develop efficient approaches to learning inherently interpretable models (e.g., rule lists, prototype models), and to add interpretability to existing “black box” models (e.g., neural networks).

Papers and Code

This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS, 2019.

Chaofan Chen, Oscar Li (co-first author), Alina Barnett, Jonathan Su, and Cynthia Rudin.

Interpretable Image Recognition with Hierarchical Prototypes. AAAI-HCOMP, 2019.

Peter Hase, Chaofan Chen, Oscar Li, and Cynthia Rudin.

An Interpretable Model with Globally Consistent Explanations for Credit Risk. NeurIPS Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy, 2018.

Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, and Tong Wang.

  • Winner of the FICO Recognition Award for the FICO Explainable Machine Learning Challenge, 2018.

An Optimization Approach to Learning Falling Rule Lists. AISTATS, 2018. (code)

Chaofan Chen and Cynthia Rudin.

Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions. AAAI, 2018.

Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin.