CV
Education
Ph.D. in Operations Research, University of Michigan - Ann Arbor, 2019 - 2024
B.S. in Mathematics (Honors), Peking University, China, 2015 - 2019
Experience
Applied Scientist, Amazon AGI, Bellevue, WA | 2024 - Present
- Tech lead for diffusion language models: leading development of Amazon Nova diffusion language models and their scaling laws
- Developing reinforcement learning scaling laws for Amazon Nova models
- Building context-engineering environments and agentic debugging tools used across internal projects
- Developed Amazon Nova video generation model
- Developed Logistics Foundation Model
Applied Scientist Intern, Amazon, Bellevue, WA | 2022, 2023
- Built a combined machine learning and optimization pipeline for supply chain disruption management, reducing disruption loss by 10%
Graduate Student Researcher, University of Michigan, Ann Arbor, MI | 2019 - 2024
- Researched adversarial learning and representation learning with diffusion models, achieving state-of-the-art accuracy and efficiency on benchmark tasks
- Developed theoretical framework for maximum-entropy sampling (10× faster solutions)
- Proposed federated learning algorithms for decision-dependent distributions
Honors and Awards
- Rackham Predoctoral Fellowship - University of Michigan (2023-2024)
- IOE Department Fellowship - University of Michigan (2020-2021)
- Outstanding Graduate - Peking University School of Mathematical Sciences (2019)
- Merit Student - Peking University (2016-2018)
- Yizheng Scholarship - Peking University (2017)
- 99-undergraduate Scholarship - Peking University (2016)
Publications
Jiachen Lei, Julius Berner, Jiongxiao Wang, Zhongzhu Chen, Zhongjia Ba, Kui Ren, Jun Zhu, Anima Anandkumar. (2025). "Robust Representation Consistency Model via Contrastive Denoising." ICLR 2025.
Yiquan Li*, Zhongzhu Chen*, Kun Jin*, Jiongxiao Wang*, Jiachen Lei, Bo Li, Chaowei Xiao. (2024). "Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness." NeurIPS 2024.
Zhongzhu Chen. (2024). "On Algorithmic Advances for Maximum-Entropy Sampling." Ph.D. Dissertation, University of Michigan - Ann Arbor.
Kun Jin*, Tongxin Yin*, Zhongzhu Chen*, Zeyu Sun, Xueru Zhang, Yang Liu, Mingyan Liu. (2024). "Performative Federated Learning: A Solution to Model-Dependent and Heterogeneous Distribution Shift." AAAI 2024.
Zhongzhu Chen, Marcia Fampa, and Jon Lee. (2024). "Generalized Scaling for the Constrained Maximum-Entropy Sampling Problem." Mathematical Programming.
Jiawei Zhang*, Zhongzhu Chen*, Huan Zhang, Chaowei Xiao, and Bo Li. (2023). "DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing." USENIX Security 23, pp. 4787-4804.
Zhongzhu Chen, Marcia Fampa, and Jon Lee. (2023). "On Computing with Some Convex Relaxations for the Maximum-Entropy Sampling Problem." INFORMS Journal on Computing, 35(2), 368-385.
Chaowei Xiao*, Zhongzhu Chen*, Kun Jin*, Jiongxiao Wang*, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, and Dawn Song. (2023). "DensePure: Understanding Diffusion Models for Adversarial Robustness." ICLR 2023.
Zhongzhu Chen, Marcia Fampa, and Jon Lee. (2022). "Technical Note - Masking Anstreicher's Linx Bound for Improved Entropy Bounds." Operations Research.
Zhongzhu Chen, Marcia Fampa, Amélie Lambert, and Jon Lee. (2021). "Mixing Convex-Optimization Bounds for Maximum-Entropy Sampling." Mathematical Programming.
Service
Conference Reviewer: NeurIPS, ICML, ICLR, WACV, AAAI, ISCO
Journal Reviewer: Mathematical Programming, Operations Research