I am a PhD candidate in the Computer Science Department at Duke University, supervised by Prof. Cynthia Rudin. My primary research interest is the interpretability of machine learning models. I am also part of the first cohort of students in the aiM-NRT program, which trains graduate students to create AI algorithms for understanding and designing materials. I worked as a research intern at Microsoft Research, where I was mentored by Rich Caruana. I completed my B.S. in Computer Science at Kuang Yaming Honors School, Nanjing University in 2018.
More information can be found in my CV.
My research focuses primarily on interpretable machine learning, especially building inherently interpretable models. This survey paper, written with my advisor and other colleagues in our lab, summarizes important technical challenge areas in building inherently interpretable machine learning models. Some of my recent research projects are listed here.
We develop a module, concept whitening (CW), that decorrelates the latent space and aligns its axes with predefined concepts. CW provides a much clearer understanding of how the network gradually learns concepts over layers, without hurting predictive performance. We are also developing methods that discover useful domain-specific concepts in an unsupervised way and explicitly represent these discovered concepts in the latent space of neural networks.
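To illustrate the idea, here is a minimal sketch (not the paper's implementation) of the whitening step that CW builds on: decorrelating latent activations with a ZCA transform, after which an orthogonal rotation can align individual axes with concepts. The shapes and the example batch are hypothetical.

```python
import torch

def zca_whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whiten a batch of latent vectors z with shape (batch, dim)."""
    z_centered = z - z.mean(dim=0, keepdim=True)
    cov = z_centered.T @ z_centered / (z.shape[0] - 1)       # (dim, dim) covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)                 # symmetric eigendecomposition
    whitening = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T
    return z_centered @ whitening                             # decorrelated, unit-variance axes

# After whitening, CW additionally learns an orthogonal rotation so that each
# axis responds to one predefined concept; that step is omitted here.
z = torch.randn(128, 64)            # hypothetical batch of latent activations
z_white = zca_whiten(z)
print(z_white.T.cov().diagonal())   # variances are approximately all ones
```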
We develop interpretable machine learning methods to discover key local and global features related to important dynamic material properties such as mechanical band gaps. These physically interpretable features can also transfer information about material properties across scales.
Every dataset is flawed, often in surprising ways that data scientists might not anticipate. We show how interpretable machine learning methods such as explainable boosting machines (EBMs) can help users detect problems lurking in their data. Specifically, we provide a number of case studies in which EBMs uncover various types of common dataset flaws, including missing values, confounders, data drift, bias and fairness issues, and outliers. We also demonstrate that, in some cases, interpretable methods such as EBMs provide simple tools for mitigating these problems when correcting the data itself is difficult.
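A hedged sketch of this workflow using the open-source `interpret` package is shown below; the dataset, file name, and column names are made up for illustration.

```python
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

df = pd.read_csv("patients.csv")                 # hypothetical tabular dataset
X, y = df.drop(columns=["outcome"]), df["outcome"]

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Inspecting each term's shape function often reveals flaws: sudden jumps at
# sentinel values (e.g., -999) suggest imputed missing data, and implausibly
# strong effects for an innocuous feature can flag confounding or leakage.
show(ebm.explain_global())
```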
We are developing a face super-resolution framework, based on searching a latent space that approximates the natural image manifold, to create super-resolved images that are realistic and qualitatively resemble the target identity.
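The sketch below conveys the latent-space-search idea under stated assumptions: a stand-in generator replaces the pretrained face generator a real system would use, and the optimization objective is a simple downscaling-consistency loss with a latent prior term.

```python
import torch
import torch.nn.functional as F

generator = torch.nn.Sequential(                 # stand-in for a pretrained face generator
    torch.nn.Linear(128, 3 * 64 * 64), torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 64, 64)),
)
lr_image = torch.rand(1, 3, 16, 16)              # hypothetical low-resolution input

z = torch.randn(1, 128, requires_grad=True)      # latent code to be searched
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    sr = generator(z)                                        # candidate high-res image
    down = F.interpolate(sr, size=(16, 16), mode="bilinear", align_corners=False)
    loss = F.mse_loss(down, lr_image)                        # downscaling consistency
    loss = loss + 1e-3 * z.norm()                            # keep z near the latent prior
    loss.backward()
    optimizer.step()
```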
Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou, Rich Caruana
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Zhi Chen, Yijie Bei, Cynthia Rudin
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, Lawrence Carin
Zhi Chen, Alex Ogren, Chiara Daraio, Cate Brinson, Cynthia Rudin