I am a PhD candidate in the Computer Science Department at Duke University, supervised by Prof. Cynthia Rudin. I'm also working closely with Prof. Margo Seltzer and Prof. Cate Brinson on my research.
My primary research interest is the interpretability of machine learning models. I'm also in the first cohort of students in the aiM-NRT program, which trains graduate students to create AI algorithms for understanding and designing materials.
In past summers, I worked as a research intern at Microsoft Research and Meta AI.
I completed my B.S. in Computer Science at Kuang Yaming Honors School, Nanjing University in 2018.
Finalist, Best Student Paper Award
Data Mining Section, INFORMS, 2022
Winner, Student Paper Competition
Section on Physical and Engineering Sciences (SPES) and Quality and Productivity (Q&P) Section, American Statistical Association, 2022
Outstanding Ph.D. Preliminary Exam Award
Department of Computer Science, Duke University, 2021
Ph.D. Fellowship
Department of Computer Science, Duke University, 2018 & 2019
Outstanding Graduate Award
Nanjing University, 2018
My research primarily focuses on interpretable machine learning, especially building inherently interpretable models. A survey paper I wrote with my advisor and colleagues in our lab summarizes the important technical challenges in building inherently interpretable machine learning models. Some of my recent research projects are listed below.
We develop a module, concept whitening (CW), that decorrelates the latent space of a neural network and aligns its axes with predefined concepts. CW provides a much clearer picture of how the network gradually learns concepts over layers, without hurting predictive performance. We are also developing methods that discover useful domain-specific concepts in an unsupervised way and explicitly represent these discovered concepts in the latent space of neural networks.
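For intuition, here is a minimal NumPy sketch of the two ingredients behind CW: whitening the latent activations, and rotating the whitened space so that chosen axes line up with concept directions. It is only an illustration of the idea, not the actual CW module (which is trained inside the network); all names and shapes below are made up.

```python
import numpy as np

def zca_whiten(Z, eps=1e-5):
    """Decorrelate latent activations Z of shape (n_samples, d) via ZCA whitening."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    cov = Zc.T @ Zc / Z.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Zc @ W

def concept_align(Z_white, concept_dirs):
    """Rotate the whitened space so that axis j measures concept j.

    concept_dirs: (d, k) matrix whose columns are directions (in the
    whitened space) representing k predefined concepts.
    """
    d, k = concept_dirs.shape
    Qc, _ = np.linalg.qr(concept_dirs)    # orthonormalize the concept directions
    rest = np.random.randn(d, d - k)      # complete them to a full orthonormal basis
    rest -= Qc @ (Qc.T @ rest)            # remove components along the concepts
    Qr, _ = np.linalg.qr(rest)
    R = np.hstack([Qc, Qr])               # orthogonal rotation matrix
    return Z_white @ R                    # first k coordinates act as concept scores

# Illustrative usage: 256 samples in a 16-d latent space, 3 concepts.
Z = np.random.randn(256, 16)
concept_dirs = np.linalg.qr(np.random.randn(16, 3))[0]
Z_aligned = concept_align(zca_whiten(Z), concept_dirs)
```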
We develop interpretable machine learning methods to discover key local and global features related to important dynamic material properties such as mechanical band gaps. These physically interpretable features can also transfer information about material properties across scales.
The Rashomon set is the set of all well-performing models for a given task. The term comes from the Rashomon effect, coined by Leo Breiman, which describes the fact that many almost-equally-accurate models can explain the same data well. We aim to construct Rashomon sets for a variety of model classes and to study their application domains.
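One common way to make this precise (the notation here is illustrative, not specific to any single paper): given a model class $\mathcal{F}$, a loss $L$, a reference model $f^{*}$ such as an empirical risk minimizer, and a tolerance $\epsilon > 0$, the Rashomon set is

$$\mathcal{R}(\epsilon) = \{\, f \in \mathcal{F} : L(f) \le L(f^{*}) + \epsilon \,\},$$

i.e., every model in the class whose loss is within $\epsilon$ of the best achievable loss.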
Every dataset is flawed, often in surprising ways that data scientists might not anticipate. We show how interpretable machine learning methods such as explainable boosting machines (EBMs) can help users detect problems lurking in their data. Specifically, we provide a number of case studies in which EBMs uncover various common dataset flaws, including missing values, confounders, data drift, bias and fairness issues, and outliers. We also demonstrate that, in some cases, interpretable models such as EBMs provide simple tools for mitigating these problems when directly correcting the data is difficult.
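As a rough illustration of this workflow, the sketch below fits an EBM with the open-source interpret package and plots its global shape functions, which is where flaws such as imputation spikes or leakage typically become visible. The dataset, file name, and column names are hypothetical.

```python
import pandas as pd
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Hypothetical tabular dataset with a binary outcome column.
df = pd.read_csv("patients.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]

# Fit an explainable boosting machine (a glass-box, GAM-style model).
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Inspect the learned per-feature shape functions; sudden jumps can point
# to imputed or sentinel values, and implausible trends can point to
# confounders, label leakage, or drift.
show(ebm.explain_global())
```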
Zhi Chen, Sarah Tan, Urszula Chajewska, Cynthia Rudin, Rich Caruana
Rui Xin*, Chudi Zhong*, Zhi Chen*, Takuya Takagi, Margo Seltzer, Cynthia Rudin
Zhi Chen, Alex Ogren, Chiara Daraio, Cate Brinson, Cynthia Rudin
Zijie Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Cynthia Rudin, Margo Seltzer
Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou, Rich Caruana
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Zhi Chen, Yijie Bei, Cynthia Rudin
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, Lawrence Carin