Greetings and welcome to my homepage


I am a fifth-year Ph.D. candidate in Computer Science and Technology at the Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua Shenzhen International Graduate School (SIGS), Tsinghua University, advised by Professor Yong Jiang and Professor Shu-Tao Xia. Before that, I received my B.S. degree with honors in Mathematics and Applied Mathematics from Ningbo University (Yangming Innovation Class) in 2018, advised by Professor Lifeng Xi.

At the beginning of my Ph.D. journey, I studied tree-based methods for their interpretability and strong theoretical properties. Currently, my research mainly focuses on AI security, especially backdoor learning, adversarial learning, data privacy, and copyright protection in deep learning. My research is supported by the Tsinghua ‘Future Scholar’ Ph.D. Fellowship.

Currently, I am working with Professor Bo Li at UIUC as a visiting Ph.D. student. I was a research intern at Ant Security Lab (2021, 2022), working with Dr. Haiqin Weng and Dr. Tao Wei; a research intern at Tencent AI Lab (2019, 2020), working with Dr. Baoyuan Wu and Dr. Zhifeng Li (supported by the Tencent Rhino-bird Elite Training Program); an intern at Wukong Investment (2018), working with Dr. Xinji Liu on ML-based high-frequency trading.

I am always open to project collaborations. Feel free to drop me an email if you have any ideas or suitable opportunities to discuss!


  • 01/2023: Two papers on backdoor attacks and defenses were accepted by ICLR 2023. Their code will be released soon.
  • 12/2022: Call for participation: We are organizing the IEEE Trojan Removal Competition, challenging you to design efficient and effective end-to-end DNN Trojan removal techniques. A generous prize pool awaits your participation!
  • 11/2022: One paper was accepted by AAAI 2023. Its code will be released soon.
  • 10/2022: Glad and humbled to share that our paper on dataset copyright protection was selected for an oral presentation at NeurIPS 2022. Its code will be released soon.
  • 09/2022: Our Springer book ‘Digital Watermarking for Machine Learning Models’ has been accepted and is currently in production. We contributed Chapter 4: The Robust and Harmless Model Watermarking.
  • 06/2022: Our survey on backdoor attacks and defenses was accepted by IEEE TNNLS.

Useful Resources

BackdoorBox: A Python Toolbox for Backdoor Attacks and Defenses

GitHub Repo of Backdoor Learning Resources

Technical Report about the ATT&CK Matrix of AI Security