
AAAI 2026

January 22, 2026

Singapore, Singapore


Deep neural networks (DNNs) have revolutionized machine learning, driving breakthroughs from image classification to autonomous vehicles. However, a critical flaw undermines their reliability. Most DNNs operate under the unrealistic closed-set assumption that all potential classes have been encountered during training. This ignores the inevitability of outliers in real-world scenarios. In safety-critical domains like autonomous driving, this oversight can have dire, irreversible consequences. DNNs may confidently misclassify unknown outlier inputs as familiar classes. Addressing this vulnerability is essential for public trust and the adoption of Artificial Intelligence (AI) in high-stakes environments. Out-of-distribution (OOD) detection has therefore emerged as a linchpin for the safe and dependable deployment of intelligent systems. This thesis tackles the urgent need for robust OOD detection. It presents three innovative contributions that elevate the field and set new standards for reliability and safety across real-world contexts.
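
The failure mode described above — a network confidently labeling an unknown input as a familiar class — is what OOD detection baselines try to catch. As a minimal illustration (not a method from this thesis), the standard maximum-softmax-probability (MSP) score flags inputs whose peak class confidence is low; the threshold below is an arbitrary choice for the toy example:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    return softmax(logits).max(axis=-1)

def detect_ood(logits, threshold=0.9):
    """Flag inputs whose peak confidence falls below the threshold as OOD."""
    return msp_score(logits) < threshold

# A peaked (ID-like) prediction and a diffuse (OOD-like) one.
id_logits = np.array([[8.0, 0.5, 0.2]])
ood_logits = np.array([[1.1, 1.0, 0.9]])
print(detect_ood(id_logits))   # [False]
print(detect_ood(ood_logits))  # [ True]
```

The well-known weakness of this baseline — DNNs can be highly confident on outliers — is precisely what motivates the stronger methods summarized below.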

First, we confront the common yet unrealistic dataset-dependent OOD detection setting, in which one labeled dataset is treated as in-distribution (ID) and all unlabeled datasets as OOD, under the impractical assumption that the training data are clean and balanced. Most real-world applications instead follow the semantically coherent OOD detection setting, where some ID samples also appear in the unlabeled datasets. We introduce two novel frameworks to handle these complexities. The Adaptive Hierarchical Graph Cut (AHGC) network resolves multi-granularity label discrepancies between labeled and unlabeled datasets, effectively identifying semantically coherent OOD samples that other methods misclassify. Complementing this, the Uncertainty-aware Adaptive Semantic Alignment (UASA) network tackles cross-domain and class-imbalanced data: it pioneers a prototype-based alignment strategy that closes the domain gap, remains robust to imbalanced classes, and addresses both OOD detection and ID classification in the unlabeled target domain.
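
To make the prototype idea concrete, here is a toy sketch of prototype-based assignment: unlabeled samples are matched to their nearest class prototype by cosine similarity, and samples far from every prototype are flagged as OOD. The 2-D features, prototype values, and threshold `tau` are illustrative assumptions, not the UASA implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def assign_with_prototypes(features, prototypes, tau=0.5):
    """Cosine-similarity assignment to class prototypes.

    Each sample gets the index of its nearest prototype, and is flagged
    OOD if even its best similarity falls below tau.
    """
    f = l2_normalize(features)      # (N, D)
    p = l2_normalize(prototypes)    # (C, D)
    sims = f @ p.T                  # (N, C) cosine similarities
    labels = sims.argmax(axis=1)
    is_ood = sims.max(axis=1) < tau
    return labels, is_ood

# Two class prototypes in a toy 2-D feature space (illustrative only).
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
features = np.array([
    [0.9, 0.1],     # near class 0
    [0.1, 0.8],     # near class 1
    [-0.7, -0.7],   # far from both prototypes -> flagged OOD
])
labels, is_ood = assign_with_prototypes(features, prototypes)
print(is_ood)  # [False False  True]
```

In practice the prototypes would be learned class centroids in a deep feature space, and uncertainty estimates would modulate how strongly each sample contributes to alignment.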

Second, we address the significant practical limitation of data scarcity by venturing into the domain of few-shot OOD detection. Recognizing that most existing methods require extensive labeled in-distribution data, we develop the Adaptive Multi-prompt Contrastive Network (AMCN). This model uniquely leverages large-scale vision-language models (CLIP) to generate adaptive textual prompts for both in-distribution and out-of-distribution classes. By learning a discriminative class boundary from only a handful of samples, AMCN effectively compensates for the scarcity of training data and corresponding labels, marking a significant step towards data-efficient OOD detection.
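
The core CLIP-style mechanism — comparing an image embedding against ID and OOD text-prompt embeddings — can be sketched with toy vectors standing in for the encoder outputs. The prompt strings in the comments and the decision rule (OOD if the best OOD-prompt match beats every ID-prompt match) are illustrative assumptions, not AMCN's actual formulation:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def prompt_ood_score(image_emb, id_prompt_embs, ood_prompt_embs):
    """CLIP-style cosine matching on toy embeddings.

    Returns True if the image matches some OOD prompt more strongly
    than any ID prompt.
    """
    img = normalize(image_emb)
    id_sims = normalize(id_prompt_embs) @ img
    ood_sims = normalize(ood_prompt_embs) @ img
    return ood_sims.max() > id_sims.max()

# Toy embeddings standing in for CLIP encoder outputs (illustrative).
id_prompts = np.array([[1.0, 0.0, 0.0],    # e.g. "a photo of a cat"
                       [0.0, 1.0, 0.0]])   # e.g. "a photo of a dog"
ood_prompts = np.array([[0.0, 0.0, 1.0]])  # a learned "unknown" prompt
cat_like = np.array([0.95, 0.05, 0.05])
odd_input = np.array([0.10, 0.05, 0.99])
print(prompt_ood_score(cat_like, id_prompts, ood_prompts))   # False
print(prompt_ood_score(odd_input, id_prompts, ood_prompts))  # True
```

The few-shot aspect enters through how the prompt embeddings are learned: with only a handful of labeled ID images, adaptive prompts replace the large labeled training sets that conventional OOD detectors require.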

Third, we extend the scope of OOD detection from static images to dynamic, complex video scenarios. We introduce the novel task of OOD Action Detection (ODAD) in untrimmed videos. To solve this, we propose the Uncertainty-Guided Appearance-Motion Association Network (UAAN), which reasons over spatial-temporal inter-object interactions by synergistically modeling appearance and motion features. This allows the simultaneous localization and identification of both known (in-distribution) and unknown (out-of-distribution) actions, a capability essential for safety-critical applications such as autonomous driving.
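
One simple way to see how uncertainty can guide the association of two streams is inverse-variance weighting: the stream with lower predictive variance dominates the fused prediction. This is an illustrative stand-in for the idea of uncertainty-guided fusion, not UAAN's actual architecture; all names and values below are assumptions:

```python
import numpy as np

def fuse_streams(app_logits, mot_logits, app_var, mot_var):
    """Uncertainty-weighted fusion of appearance and motion logits.

    Each stream is weighted by the inverse of its predictive variance,
    so the more certain stream contributes more to the fused output.
    """
    w_app = 1.0 / (app_var + 1e-8)
    w_mot = 1.0 / (mot_var + 1e-8)
    return (w_app * app_logits + w_mot * mot_logits) / (w_app + w_mot)

app = np.array([2.0, 0.1, 0.1])   # appearance stream, confident
mot = np.array([0.5, 0.4, 0.6])   # motion stream, noisier
fused = fuse_streams(app, mot, app_var=0.1, mot_var=1.0)
print(fused)  # lands much closer to the low-variance appearance stream
```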

Collectively, these contributions redefine the landscape of OOD detection across four pivotal dimensions: semantic granularity understanding, cross-domain robustness, data efficiency, and temporal dynamics modeling. The methodologies introduced not only surpass existing benchmarks but also prove their value in diverse, real-world settings where robustness is non-negotiable.


