AAAI 2026

January 25, 2026

Singapore, Singapore


Code models have become integral to modern software development, yet they remain vulnerable to backdoor attacks through poisoned training data. Current code backdoor attacks struggle with a critical trade-off. Static triggers using fixed code patterns achieve high transferability across different settings, but are easily detected by defenses. Conversely, dynamic triggers that adapt to code context evade detection effectively but exhibit poor cross-dataset transferability. Moreover, existing dynamic approaches unrealistically assume attackers have access to victims' training data, limiting their practical applicability. To overcome these limitations, we introduce Sharpness-aware Transferable Adversarial Backdoor (STAB), a novel attack that achieves transferability and stealthiness without accessing victim data. Our key idea is that adversarial perturbations discovered in flat regions of the loss landscape transfer more effectively across datasets than those found in sharp minima. STAB leverages this by training a surrogate model with Sharpness-Aware Minimization (SAM) to guide model parameters toward these flat regions. We then employ a Gumbel-Softmax based optimization to transform the discrete search for trigger tokens into a differentiable process, generating context-aware adversarial triggers. Experiments on three datasets and two code models demonstrate the superiority of STAB. Compared to static triggers, STAB significantly improves stealthiness, maintaining a 73.2% average attack success rate after defense (ASR-D) versus near-zero for static approaches. In cross-dataset scenarios, STAB also outperforms the state-of-the-art dynamic attack, AFRAIDOOR, with a 12.4% higher ASR-D, while preserving model performance on clean inputs.
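As background on the trigger optimization described in the abstract, the Gumbel-Softmax trick relaxes a discrete choice over vocabulary tokens into a differentiable "soft" sample, so gradients can flow back into the trigger logits. A minimal NumPy sketch of the relaxation itself (the vocabulary size, logits, and temperature here are illustrative, not values from the paper):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Draw a differentiable soft one-hot sample over token logits.

    Adding i.i.d. Gumbel(0, 1) noise to the logits and taking a
    temperature-scaled softmax approximates sampling from the
    categorical distribution softmax(logits); as tau -> 0 the output
    approaches a hard one-hot vector.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())               # numerically stable softmax
    return y / y.sum()

# Hypothetical trigger-token logits over a 5-token vocabulary.
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0])
soft_sample = gumbel_softmax(logits, tau=0.5)
# soft_sample is a probability vector that can be fed through the model
# in place of a hard token choice, keeping the search differentiable.
```

In an attack pipeline like the one sketched in the abstract, such soft samples would be optimized against the SAM-trained surrogate model and then discretized to concrete trigger tokens.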
