About Me

I am a PhD candidate in the Computer Science Department at Duke University, supervised by Prof. Cynthia Rudin. My primary research interest is the interpretability of machine learning models. I am also a member of the first cohort of the aiM-NRT program, which trains graduate students to create AI algorithms for understanding and designing materials. I worked as a research intern at Microsoft Research, where I was mentored by Rich Caruana. I completed my B.S. in Computer Science at Kuang Yaming Honors School, Nanjing University in 2018.

More information can be found in my CV.

Research

My research focuses primarily on interpretable machine learning, especially building inherently interpretable models. A survey paper I wrote with my advisor and colleagues in our lab summarizes important technical challenge areas in building inherently interpretable machine learning models. Some of my recent research projects are listed below.

Concept-based Interpretable Neural Networks

We develop a module, concept whitening (CW), that decorrelates the latent space and aligns its axes with predefined concepts. CW provides a much clearer picture of how the network gradually learns concepts over layers, without hurting predictive performance. We are also developing methods that discover useful domain-specific concepts in an unsupervised way and explicitly represent these discovered concepts in the latent space of neural networks.
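For the curious, here is a minimal PyTorch sketch of the whitening-plus-rotation idea behind CW. It is illustrative only: the `rotation` matrix stands in for the orthogonal matrix that CW learns during training to align axes with concepts, and the full module (which replaces a normalization layer inside the network) is not shown.

```python
import torch

def concept_whitening_sketch(z, rotation):
    """Whiten a batch of latent activations, then rotate so that
    individual axes line up with predefined concepts.

    z        : (batch, d) latent activations
    rotation : (d, d) orthogonal matrix; in CW this is learned so that
               axis k responds to concept k (a placeholder here)
    """
    # Center the activations
    z_centered = z - z.mean(dim=0, keepdim=True)
    # ZCA whitening: decorrelate the dimensions and give them unit variance
    cov = z_centered.T @ z_centered / (z.shape[0] - 1)
    eigvals, eigvecs = torch.linalg.eigh(cov)
    whiten = eigvecs @ torch.diag(eigvals.clamp(min=1e-5).rsqrt()) @ eigvecs.T
    z_white = z_centered @ whiten
    # Rotate the whitened axes so each aligns with a concept
    return z_white @ rotation
```

For a quick test, `rotation` could be any orthogonal matrix, e.g. the Q factor from `torch.linalg.qr(torch.randn(d, d))`; in CW it is instead optimized on auxiliary concept datasets.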

Interpretable Machine Learning for Metamaterial Design

We develop interpretable machine learning methods to discover key local and global features related to important dynamic material properties, such as mechanical band gaps. These physically interpretable features can also transfer information about material properties across scales.

Discovering Common Flaws in Data Using Interpretable Models

Every dataset is flawed, often in surprising ways that data scientists might not anticipate. We show how interpretable machine learning methods such as explainable boosting machines (EBMs) can help users detect problems lurking in their data. Specifically, we provide a number of case studies in which an EBM uncovers various types of common dataset flaws, including missing values, confounders, data drift, bias and fairness issues, and outliers. We also demonstrate that, in some cases, interpretable methods such as EBMs provide simple tools for mitigating these problems when correcting the data itself is difficult.
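The workflow is simple to try. Below is a minimal sketch assuming the open-source interpret package and a pandas DataFrame `df` with a binary label column `outcome`; both are placeholders for whatever dataset is being audited.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

X = df.drop(columns=["outcome"])
y = df["outcome"]

# Fit an EBM: each feature gets its own additive shape function
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Inspect the learned shape functions. Sudden jumps, effects at impossible
# values, or suspiciously strong terms often point at missing-value codes,
# confounders, drift, or outliers in the underlying data.
show(ebm.explain_global())
```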

Face Super-Resolution via Latent Space Search

We are developing a face super-resolution framework that searches a latent space approximating the natural image manifold to create super-resolved images that are realistic and qualitatively resemble the target identity.
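The core search can be sketched in a few lines of PyTorch. Everything below is illustrative: `generator`, its `latent_dim` attribute, and the hyperparameters are hypothetical placeholders, and the terms our framework uses to stay on the image manifold and preserve identity are omitted.

```python
import torch
import torch.nn.functional as F

def search_latent(generator, lr_image, steps=500, lr=0.1, scale=8):
    """Search the generator's latent space for a high-resolution face whose
    downscaled version matches the given low-resolution input.

    generator : hypothetical pretrained face generator with a `latent_dim`
                attribute, mapping a latent vector to an image tensor
    lr_image  : low-resolution input, shape (1, 3, H/scale, W/scale)
    """
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        sr = generator(z)  # candidate high-resolution image
        down = F.interpolate(sr, scale_factor=1 / scale,
                             mode="bilinear", align_corners=False)
        loss = F.mse_loss(down, lr_image)  # consistency with the low-res input
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```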

Publications

Using Explainable Boosting Machines (EBMs) to Detect Common Flaws in Data, ECML-PKDD International Workshop and Tutorial on eXplainable Knowledge Discovery in Data Mining (XKDD), 2021.

Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou, Rich Caruana

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges, accepted, Statistics Surveys, 2021.

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong

Concept Whitening for Interpretable Image Recognition, Nature Machine Intelligence, 2020.

Zhi Chen, Yijie Bei, Cynthia Rudin

Adversarial Feature Matching for Text Generation, Proceedings of the International Conference on Machine Learning (ICML), 2017.

Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, Lawrence Carin

How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning, under review, 2021.

Zhi Chen, Alex Ogren, Chiara Daraio, Cate Brinson, Cynthia Rudin