AAAI 2026

January 22, 2026

Singapore, Singapore


Latent Diffusion Models have become a powerful tool for generating high-fidelity unrestricted adversarial examples. However, existing methods typically perturb only the initial latent or rely on prompt engineering, which is ill-suited to the iterative nature of the diffusion process: external text prompts introduce optimization instability, and cumulative drift pushes the adversarial images off the data manifold. In this paper, we propose a hierarchical attack framework that operates in alignment with the model's generative manifold and leverages intermediate denoising states to maximize attack transferability and visual fidelity. Extensive experiments show that the proposed attack improves adversarial transferability by $10$-$20$\% against a diverse set of normally-trained models and achieves an over 10.5\% higher success rate against adversarially-defended models, while simultaneously enhancing visual quality, reducing FID by $1.0$-$1.2$ and improving LPIPS by 16.7\%.
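The abstract's core idea, attacking intermediate denoising states rather than only the initial latent, can be illustrated with a toy sketch. This is not the paper's implementation: `denoise_step` is a hypothetical linear stand-in for a diffusion denoiser, and `surrogate_logits` a hypothetical linear surrogate classifier; the point is only that a small adversarial step is applied at every level of the denoising chain.

```python
import numpy as np

# Toy sketch (NOT the authors' code): perturb each intermediate
# denoising state toward a target class under a surrogate classifier.
rng = np.random.default_rng(0)
D, C = 16, 3                      # latent dimension, number of classes
W = rng.standard_normal((C, D))   # hypothetical surrogate classifier weights

def denoise_step(z, t, T):
    # Hypothetical "denoiser": progressively shrinks the latent.
    return z * (1 - 1.0 / (T - t + 1))

def surrogate_logits(z):
    return W @ z

def attack_intermediate(z0, target, T=10, eps=0.05):
    """Run the denoising chain, nudging every intermediate state z_t
    in the direction that raises the target-class logit."""
    z = z0
    for t in range(T):
        z = denoise_step(z, t, T)
        # For a linear surrogate, d(logit_target)/dz is simply W[target].
        g = W[target]
        z = z + eps * np.sign(g)   # small signed step at each denoising level
    return z

z0 = rng.standard_normal(D)
z_adv = attack_intermediate(z0, target=1)

# Clean trajectory for comparison: same chain without perturbations.
clean = z0.copy()
for t in range(10):
    clean = denoise_step(clean, t, 10)

print(surrogate_logits(z_adv)[1] > surrogate_logits(clean)[1])  # → True
```

Because each per-step perturbation is small and is itself passed through the remaining denoising steps, the attack stays closer to the generative trajectory than a single large perturbation of the initial latent would; this is the intuition behind the manifold-alignment claim above, not a reproduction of the method.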

