Welcome to Yue Dai’s Homepage
I am a final-year Ph.D. student in the Department of Computer Science at the University of Pittsburgh, advised by Dr. Youtao Zhang and Dr. Xulong Tang. I received my M.S. in Telecommunications from the University of Maryland and my B.E. in Electrical Engineering from Beihang University, China.
My research interests center on machine learning systems and accelerators, with a focus on efficient systems and robust algorithms for deep graph learning. Specifically, I work on accelerating Graph Neural Network (GNN)-based deep graph learning models across diverse hardware platforms and on exploring the robustness of these applications against adversarial attacks.
🌟 I am actively seeking tenure-track faculty positions and industry research scientist positions. Please feel free to reach out regarding any potential opportunities.
CV / Email / Google Scholar / LinkedIn
News
[01/2025] One paper accepted at ISCA 2025. Thanks to all the collaborators!
[01/2025] One paper accepted at ASPLOS 2025. Thanks to all the collaborators!
[01/2025] One paper accepted at ICLR 2025. Thanks to all the collaborators!
[08/2023] One paper accepted at ICCD 2023. Thanks to all the collaborators!
[01/2023] One paper accepted at ICLR 2023 (spotlight). Thanks to all the collaborators!
[10/2022] One paper accepted at HPCA 2023. Thanks to all the collaborators!
[08/2022] One paper accepted at ArgMining 2022 (best paper). Thanks to all the collaborators!
[06/2022] One paper accepted by CCF Transactions on High Performance Computing. Thanks to all the collaborators!
Publications
* co‑first author
Yue Dai, Xulong Tang, Youtao Zhang. 2025. Cascade: A Dependency‑Aware Efficient Training Framework for Temporal Graph Neural Network. 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS’2025).
Yingheng Li, Yue Dai, Aditya Pawar, Rongchao Dong, Jun Yang, Youtao Zhang, Xulong Tang. 2025. Using Reinforcement Learning to Guide Graph State Generation for Photonic Quantum Computers. The 52nd International Symposium on Computer Architecture (ISCA’2025).
Sheng Li, Qitao Tan, Yue Dai, Zhenglun Kong, Tianyu Wang, Jun Liu, Ao Li, Ninghao Liu, Yufei Ding, Xulong Tang, Geng Yuan. 2025. Mutual Effort for Efficiency: A Similarity‑based Token Pruning for Vision Transformers in Self‑Supervised Learning. The Thirteenth International Conference on Learning Representations (ICLR’2025).
Yue Dai, Youtao Zhang, Xulong Tang. 2023. CEGMA: Coordinated elastic graph matching acceleration for graph matching networks. 2023 IEEE International Symposium on High‑Performance Computer Architecture (HPCA’2023).
Yue Dai, Xulong Tang, Youtao Zhang. 2023. FlexGM: An Adaptive Runtime System to Accelerate Graph Matching Networks on GPUs. 2023 IEEE 41st International Conference on Computer Design (ICCD’2023).
Sheng Li*, Geng Yuan*, Yue Dai*, Youtao Zhang, Yanzhi Wang, Xulong Tang. 2023. SmartFRZ: An Efficient Training Framework Using Attention-based Layer Freezing. The 11th International Conference on Learning Representations (ICLR’2023).
Yue Dai, Xulong Tang, Youtao Zhang. 2022. An Efficient Segmented Quantization for Graph Neural Networks. CCF Transactions on High Performance Computing (THPC’2022), 4(4), 461-473.
Zhexiong Liu*, Meiqi Guo*, Yue Dai*, Diane Litman. 2022. ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining. Proceedings of the 9th Workshop on Argument Mining at the International Conference on Computational Linguistics (COLING’2022). Best Paper Award.
Sheng Li, Geng Yuan, Yawen Wu, Yue Dai, Chao Wu, Alex K Jones, Jingtong Hu, Yanzhi Wang, Xulong Tang. 2024. EdgeOL: Efficient in‑situ Online Learning on Edge Devices. arXiv preprint arXiv:2401.16694.
Justin Brody, Samuel Barham, Yue Dai, Christopher Maxey, Donald Perlis, David Sekora, Jared Shamwell. 2016. Reasoning with grounded self‑symbols for human‑robot interaction. 2016 AAAI Fall Symposium Series.
Xuejun Liu, Haiying Luan, Wenbai Chen, Yue Dai, Jiandong Liu, Bo Lan. 2014. Electrical nonlinearity pre‑compensation for CO‑OFDM system. Optik, 125(2), 616‑619.