Greetings and Welcome to My Homepage
Biography
I am currently a Research Fellow at Nanyang Technological University, working with Prof. Tianwei Zhang and Prof. Dacheng Tao. Before that, I was a Research Professor (similar to a tenure-track Associate Professor in the U.S.) in the group led by Prof. Zhan Qin and Prof. Kui Ren at the State Key Laboratory of Blockchain and Data Security, Zhejiang University, and also at HIC-ZJU. I received my Ph.D. degree with honors in Computer Science and Technology from Tsinghua University, advised by Prof. Yong Jiang and Prof. Shu-Tao Xia. I received my B.S. degree with honors in Mathematics from Ningbo University (Yangming Class), advised by Prof. Lifeng Xi. I also collaborated closely with Dr. Zhifeng Li (from Tencent) and Prof. Bo Li (from UIUC) during my Ph.D. studies.
My research mainly focuses on Trustworthy ML and Responsible AI, especially AI Safety, AI Copyright Protection, and Explainable AI. My long-term goal is to make AI-based systems more secure and copyright-preserving throughout their full life cycle. Recently, I have focused more on Trustworthy Generative AI (e.g., LLMs and Diffusion Models). I always pursue simple yet effective methods with deep insights and theoretical support.
My research has been published in multiple top-tier conferences and journals, such as S&P, ICML, ICLR, NeurIPS, CVPR, IEEE TPAMI, IEEE TIFS, and IEEE TDSC. I have served as an Associate Editor of Pattern Recognition, an Area Chair of ICML and ACM MM, a Senior Program Committee Member of AAAI and IJCAI, and a reviewer for IEEE TPAMI, IJCV, IEEE TIFS, IEEE TDSC, etc. My research has been featured in major media outlets, such as IEEE Spectrum and MIT Technology Review. I was the recipient of the Outstanding Junior Faculty Award at Zhejiang University (2023), the Best Paper Award at PAKDD (2023), the Rising Star Award at WAIC (2023), the KAUST Rising Stars in AI (2024), and the DAAD AInet Fellowship (2024).
Selected Research
- AI Safety
- Training Phase: [TDSC’25, ICLR’25(a), ICLR’25(b), TIFS’24(a), TIFS’24(b), ICML’24(a), ICML’24(b), CVPR’24(a), CVPR’24(b), ICLR’24(a), ICLR’24(b), PR’23, NeurIPS’23, ICCV’23, ICLR’23(a), ICLR’23(b), TNNLS’22, ICLR’22(a), ICLR’22(b), ICCV’21, ICLR’21]
- Inference Phase: [IEEE S&P’25, USENIX Security’25, ICLR’25, IJCV’24, AAAI’23, PR’22, ECCV’20]
- AI Copyright
- Data Copyright: [IEEE S&P’25, ICLR’25, NeurIPS’24, TIFS’24, TIFS’23, NeurIPS’23, PR’22, NeurIPS’22]
- Model Copyright: [TPAMI’25, NDSS’25, CVPR’25, ECCV’24, ICCV’23, AAAI’22]
- Explainable AI: [ICLR’24, TIFS’24]
For Potential Students and Collaborators
I am always looking for highly self-motivated students and research interns to join exciting research projects on Trustworthy ML and Responsible AI in our group at Zhejiang University. I provide responsible, hands-on guidance. In addition, I am always happy to work on interesting projects with external collaborators. Drop me an email if you are interested!
News
- 03/2025: One paper about backdoor attack is accepted by IEEE TDSC 2025.
- 03/2025: One paper about prompt inversion attack is accepted by IEEE S&P 2025.
- 02/2025: One paper about T2I model watermarking is accepted by CVPR 2025.
- 02/2025: Our paper about model ownership verification is accepted by IEEE TPAMI.
- 01/2025: One paper about membership inference attack is accepted by USENIX Security 2025.
Useful Resources
BackdoorBox: A Python Toolbox for Backdoor Attacks and Defenses
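To give a flavor of what such a toolbox covers, here is a minimal, self-contained sketch of a BadNets-style data-poisoning backdoor attack. It is illustrative only and does not use BackdoorBox's actual API; the dataset (MNIST), trigger pattern, target label, and poisoning rate are all hypothetical choices made for the example.

```python
# Minimal sketch of a BadNets-style backdoor attack (illustrative only;
# this does NOT reproduce BackdoorBox's API). Assumptions: MNIST data,
# a 3x3 white trigger in the bottom-right corner, target label 0, and
# a 5% poisoning rate.
import random

from torch.utils.data import Dataset
from torchvision import datasets, transforms


class PoisonedMNIST(Dataset):
    """Wraps MNIST; stamps a trigger and relabels a fraction of samples."""

    def __init__(self, root, poison_rate=0.05, target_label=0, train=True):
        self.base = datasets.MNIST(root, train=train, download=True,
                                   transform=transforms.ToTensor())
        n = len(self.base)
        self.poison_idx = set(random.sample(range(n), int(poison_rate * n)))
        self.target_label = target_label

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]      # img: [1, 28, 28], values in [0, 1]
        if idx in self.poison_idx:
            img = img.clone()
            img[:, -3:, -3:] = 1.0       # stamp a white 3x3 trigger patch
            label = self.target_label    # flip the label to the target class
        return img, label


# Training any standard classifier on this dataset yields a model that
# behaves normally on clean inputs but predicts the target label whenever
# the trigger is present.
poisoned_train = PoisonedMNIST("./data", poison_rate=0.05, target_label=0)
```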