AAAI 2026

January 24, 2026

Singapore, Singapore


Outlier detection (OD) aims to identify abnormal instances, known as outliers or anomalies, by learning typical patterns of normal data, or inliers. Performing OD under an unsupervised regime--without any information about anomalous instances in the training data--is challenging. A recently observed phenomenon, known as the $\textit{inlier-memorization (IM) effect}$, where deep generative models (DGMs) tend to memorize inlier patterns during early training, provides a promising signal for distinguishing outliers. However, existing unsupervised approaches that rely solely on the IM effect still struggle when inliers and outliers are not well separated or when outliers form dense clusters. To address these limitations, we incorporate $\textit{active learning}$ to selectively acquire informative labels, and propose $\textit{IMBoost}$, a novel framework that explicitly reinforces the IM effect to improve outlier detection. Our method consists of two stages: 1) a $\textit{warm-up}$ phase that induces and promotes the IM effect, and 2) a $\textit{polarization}$ phase in which actively queried samples are used to maximize the discrepancy between inlier and outlier scores. In particular, we propose a novel query strategy and a tailored loss function for the polarization phase to effectively identify informative samples and fully leverage the limited labeling budget. We provide a theoretical analysis showing that IMBoost consistently decreases the inlier risk while increasing the outlier risk throughout training, thereby amplifying their separation. Extensive experiments on diverse benchmark datasets demonstrate that IMBoost not only significantly outperforms state-of-the-art active OD methods but also requires substantially less computational cost.
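The two-stage idea can be illustrated with a minimal toy sketch. Everything below is a simplification invented for illustration: a center-based distance scorer stands in for the DGM, a median-distance rule stands in for the paper's query strategy, and a signed squared-distance objective stands in for its tailored polarization loss. None of these are the authors' actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inliers near the origin plus a small dense outlier cluster.
inliers = rng.normal(0.0, 0.5, size=(200, 2))
outliers = rng.normal(3.0, 0.3, size=(20, 2))
X = np.vstack([inliers, outliers])
y = np.array([0] * 200 + [1] * 20)  # ground truth, used only to answer queries

center = X.mean(axis=0).copy()  # parameter of the toy scorer

def scores(c):
    # Squared distance to the center plays the role of an outlier score.
    return ((X - c) ** 2).sum(axis=1)

# Stage 1 (warm-up): fit the scorer on *all* data. The inlier majority
# dominates the gradient, so inlier scores shrink first -- a crude
# stand-in for the inlier-memorization effect in early DGM training.
for _ in range(50):
    grad = -2 * (X - center).mean(axis=0)
    center -= 0.1 * grad

# Stage 2 (polarization): query the most ambiguous samples (scores
# nearest the median), then pull the scorer toward labeled inliers and
# push it away from labeled outliers, widening the score gap.
budget = 10
s = scores(center)
queried = np.argsort(np.abs(s - np.median(s)))[:budget]
labels = y[queried]  # an oracle answers the queries

for _ in range(100):
    diff = X[queried] - center
    sign = np.where(labels == 1, -1.0, 1.0)  # -1: raise score, +1: lower it
    grad = (-2 * diff * sign[:, None]).mean(axis=0)
    center -= 0.05 * grad

s = scores(center)
gap = s[y == 1].mean() - s[y == 0].mean()
print(gap)  # positive: outlier scores now exceed inlier scores
```

Even this linear toy shows the intended dynamic: the warm-up stage separates scores using only the majority structure of the data, and the polarization stage spends a small labeling budget to enlarge that separation.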

