AAAI 2026

January 23, 2026

Singapore, Singapore

In this paper, we study the adversarial robustness of deep neural networks (DNNs) for classification, compared with that of optimal classifiers. We examine the smallest magnitude of additive perturbation that can change a classifier's output. We provide a matrix-theoretic explanation of the adversarial fragility of DNNs for classification. In particular, our theoretical results show that the adversarial robustness of a neural network can degrade as the input dimension d increases: analytically, we show that the adversarial robustness of a neural network can be as small as 1/√d of the best possible adversarial robustness of an optimal classifier. Our theoretical predictions match empirical results remarkably well. This matrix-theoretic explanation is consistent with an earlier information-theoretic, feature-compression-based explanation of the adversarial fragility of neural networks.
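
To make the central quantity concrete, consider the one setting where the smallest output-changing additive perturbation has a simple closed form: a linear binary classifier f(x) = sign(w·x + b), where the minimal ℓ2 perturbation magnitude is |w·x + b| / ||w||, attained along the weight direction. The sketch below is purely illustrative and is not the paper's matrix-theoretic construction; the classifier, weights, and inputs are hypothetical synthetic values.

import numpy as np

def min_l2_perturbation(w, b, x):
    """Smallest L2-norm additive perturbation flipping sign(w @ x + b).

    For a linear classifier, the nearest point on the decision boundary
    lies along w, so the minimal perturbation magnitude equals the
    distance |w @ x + b| / ||w||_2.
    """
    margin = w @ x + b
    w_norm = np.linalg.norm(w)
    direction = -np.sign(margin) * w / w_norm   # unit vector toward the boundary
    magnitude = abs(margin) / w_norm
    return magnitude, magnitude * direction

# Tiny demo with synthetic weights and input (hypothetical values).
rng = np.random.default_rng(0)
d = 100
w = rng.standard_normal(d)
b = 0.0
x = rng.standard_normal(d)

eps, delta = min_l2_perturbation(w, b, x)
x_adv = x + (1.0 + 1e-6) * delta   # overshoot slightly to cross the boundary
print("minimal perturbation norm:", eps)
print("label flipped:", np.sign(w @ x + b) != np.sign(w @ x_adv + b))

This closed form serves only as a baseline for intuition; the paper's analysis compares such minimal perturbation norms between neural networks and optimal classifiers as the input dimension d grows.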
