Selected Research
Representative projects across my research areas.
LLM Unlearning Evaluation
Do LLMs really forget? We probe whether models truly unlearn, using knowledge-correlation analysis and confidence-aware evaluation.
MoEEdit
Efficient and routing-stable knowledge editing for Mixture-of-Experts LLMs — surgically updating knowledge without disrupting expert routing.
Privacy Risks in LLM Unlearning
Revealing underestimated privacy risks for minority populations in large language model unlearning — showing that current methods disproportionately fail for underrepresented groups.
News
Invited Talk: "From Atomic Facts to Structured Internal Knowledge: Rethinking Unlearning and Jailbreaking in LLMs" at Google Research Seminar.
Proposal Lead Author: "Structured-Knowledge-Guided Agentic LLM Jailbreaking and Defense" — NAIRR Pilot Award, recognized for a "high degree of alignment with national AI strategic focus."
Invited Talk: "From Atomic Facts to Structured Internal Knowledge: Rethinking Unlearning and Jailbreaking in LLMs" at IBM Research.
Received the 2025 Georgia Tech CSIP Outstanding Research Award.
New preprint: CKA-Agent achieves 96–99% jailbreak success on frontier LLMs via adaptive tree search.
Two papers accepted at NeurIPS 2025.
Two papers accepted at ICML 2025.
Started AI Research Internship at Amazon, Seattle.
Publications
Full list on Google Scholar. * = equal contribution.
Preprints
The Trojan Knowledge: Bypassing Commercial LLM Guardrails via Harmless Prompt Weaving and Adaptive Tree Search
Guarding Multiple Secrets: Enhanced Summary Statistic Privacy for Data Sharing
Conference & Journal Papers
MoEEdit: Efficient and Routing-Stable Knowledge Editing for Mixture-of-Experts LLMs
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Differentially Private Relational Learning with Entity-level Privacy Guarantees
Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning
Generalization Principles for Inference over Text-Attributed Graphs with Large Language Models
Privately Learning from Graphs with Applications in Fine-tuning Large Language Models
Differentially Private Graph Diffusion with Applications in Personalized PageRanks
On the Inherent Privacy Properties of Discrete Denoising Diffusion Models
Learning Scalable Structural Link Representations with Bloom Signatures
SLA2P: Self-supervised Anomaly Detection with Adversarial Perturbation
Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective
Experience
Amazon, Seattle, WA
AI Research Intern — Reflection and Exploration-based LLM Action Planning
JPMorgan Chase, New York, NY
AI Research Intern — Graph Data Generation via Margin-Relaxed Schrödinger Bridges
Education
Xi'an Jiaotong University
B.S. in Mathematics & Applied Mathematics
Qian Xuesen College · GPA 3.89/4.00
Georgia Tech — Visiting
Honors Student Program, School of Mathematics
Invited Talks
Google Research Seminar
"From Atomic Facts to Structured Internal Knowledge: Rethinking Unlearning and Jailbreaking in LLMs"
IBM Research
"From Atomic Facts to Structured Internal Knowledge: Rethinking Unlearning and Jailbreaking in LLMs"
Honors
- 2025 CSIP Outstanding Research Award
- 2025 Lambda Research Grant
- 2025 OpenAI Researcher Access Grant
- 2022 NeurIPS Travel Award
- 2019 IEEE BigData Student Award
- "Zhufeng" Scholarship — First Prize (Ministry of Education)
- Outstanding Student Award — XJTU