AAAI 2026

January 22, 2026

Singapore


Continual learning for action recognition is a critical capability for next-generation Extended Reality (XR) systems, yet it faces a severe real-world challenge: strict user-privacy constraints prohibit data rehearsal. While recent prompt-based continual learning methods show promise, we argue their flat, single-granularity design is structurally mismatched to the complexity of human actions. This monolithic architecture fails to model the inherent hierarchical structure of individual actions and overlooks the standard action primitives shared across tasks, resulting in suboptimal performance and hindered knowledge transfer. To overcome this limitation, we propose DPCA, a novel spatio-temporal continual learning framework with multi-granularity adaptive prompting. DPCA resolves this mismatch through three synergistic components. First, a task-specific prompter employs a multi-granularity query system to capture the unique, compositional semantics of each action. Second, a task-agnostic prompter learns a globally shared vocabulary of "action primitives," providing a stable and generalizable knowledge base that mitigates catastrophic forgetting. Third, Dissimilarity Attention Rectification at each granularity level leverages a reverse attention mechanism to model class-agnostic background information, effectively alleviating overfitting. The synergy between these components enables robust model adaptation without access to past data. Rigorous experiments on the NTU RGB+D benchmark, under a strict rehearsal-free, few-shot protocol, confirm that DPCA establishes a new state of the art, advancing the realization of intelligent, privacy-respecting XR systems.
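The core intuition behind the reverse attention mechanism can be sketched in a few lines. The following is a minimal illustration assuming standard scaled dot-product attention; the function name, tensor shapes, and the complement-and-renormalize scheme are our assumptions for exposition, not the authors' implementation.

```python
import torch


def reverse_attention(query, key, value):
    """Hypothetical sketch: attend to what standard attention ignores,
    as a proxy for class-agnostic background context.

    query: (B, Tq, d), key/value: (B, Tk, d), with Tk > 1.
    """
    d = query.size(-1)
    scores = query @ key.transpose(-2, -1) / d ** 0.5  # (B, Tq, Tk)
    attn = scores.softmax(dim=-1)          # standard map: foreground focus
    rev = 1.0 - attn                       # complement: weight ignored tokens
    rev = rev / rev.sum(dim=-1, keepdim=True)  # renormalize rows to sum to 1
    return rev @ value                     # background-weighted summary, (B, Tq, d)


# Usage with made-up sizes: 8 prompt queries over 50 spatio-temporal tokens.
q = torch.randn(2, 8, 64)
kv = torch.randn(2, 50, 64)
bg = reverse_attention(q, kv, kv)  # (2, 8, 64)
```

Under this reading, a rectification term built from the complemented attention map would discourage prompts from latching onto class-specific foreground cues alone, which is one plausible way such a mechanism could alleviate overfitting in the few-shot regime.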

