I am a sixth-year Ph.D. student
in the New York University Systems Group, advised by Prof. Jinyang Li and Prof. Aurojit Panda.
I obtained my bachelor's degree in Computer Science from Tsinghua University in 2019.
I am broadly interested in systems research, particularly in systems abstractions, reliability, debugging, and profiling. My recent research focuses on Machine Learning Systems and Machine Learning Compilers.
Publications
- Understanding Stragglers in Large Model Training Using What-if Analysis.
Jinkun Lin, Ziheng Jiang, Zuquan Song, Sida Zhao, Menghan Yu, Zhanghan Wang, Chenyuan Wang, Zuocheng Shi, Xiang Shi, Wei Jia, Zherui Liu, Shuguang Wang, Haibin Lin, Xin Liu, Aurojit Panda, and Jinyang Li. To appear at OSDI 2025.
- Stateful Large Language Model Serving with Pensieve. [Paper]
Lingfan Yu, Jinkun Lin, and Jinyang Li. EuroSys 2025.
- NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers. [Paper]
Jiawei Liu*, Jinkun Lin* (Equal Contribution), Fabian Ruffy, Cheng Tan, Jinyang Li, Aurojit Panda, and Lingming Zhang. ASPLOS 2023.
- Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments. [Paper][Poster]
Jinkun Lin*, Anqi Zhang* (Equal Contribution), Mathias Lécuyer, Jinyang Li, Aurojit Panda, and Siddhartha Sen. ICML 2022.
- HOP: Heterogeneity-Aware Decentralized Training. [Paper]
Qinyi Luo, Jinkun Lin, Youwei Zhuo, and Xuehai Qian. ASPLOS 2019.