AAAI 2026

January 22, 2026

Singapore, Singapore


Retrieval-Augmented Generation (RAG) improves the factual accuracy of large language models by grounding responses in external content. However, most RAG systems assume access to static and well-organized corpora with fixed retrieval logic. In practice, real-world sources are heterogeneous and unlabeled, including user-uploaded documents, manuals, and datasets. Effective access in such settings requires adaptive and self-directed retrieval behavior. We present SegMem-RAG, a memory-augmented RAG framework that learns to route queries across multiple unlabeled corpora based on experience. It incrementally updates a structured memory and uses self-reflection to guide retrieval over time without supervision. Experimental results demonstrate that SegMem-RAG significantly outperforms recent baselines in generation quality on multi-corpus QA tasks.
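The abstract describes routing queries across unlabeled corpora using accumulated experience, with self-reflection incrementally updating a structured memory. The paper's actual algorithm is not given here, so the following is only a minimal sketch of that general idea: all class and method names are hypothetical, and the success-rate scoring rule is an assumption standing in for whatever memory structure SegMem-RAG actually uses.

```python
# Hypothetical sketch of experience-based corpus routing.
# Names and the scoring rule are illustrative assumptions,
# not the paper's actual method.
from collections import defaultdict


class CorpusRouter:
    """Routes queries to one of several unlabeled corpora,
    preferring corpora whose past retrievals were judged useful."""

    def __init__(self, corpus_ids, prior=1.0):
        # Per-corpus success/attempt counts serve as a simple
        # structured memory (a smoothed empirical success rate).
        self.corpus_ids = list(corpus_ids)
        self.successes = defaultdict(lambda: prior)
        self.attempts = defaultdict(lambda: 2 * prior)

    def route(self, query):
        # Pick the corpus with the highest past success rate.
        # (A real system would also condition on query features.)
        return max(self.corpus_ids,
                   key=lambda c: self.successes[c] / self.attempts[c])

    def reflect(self, corpus_id, was_useful):
        # A self-reflection signal (e.g. the model judging whether the
        # retrieved passage supported its answer) updates memory
        # without any human supervision.
        self.attempts[corpus_id] += 1
        if was_useful:
            self.successes[corpus_id] += 1
```

Usage: call `route` to choose a corpus for retrieval, generate an answer, then feed a usefulness judgment back through `reflect`, so routing improves as experience accumulates.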

