EMNLP 2025

November 05, 2025

Suzhou, China


Retrieval Augmented Generation (RAG) has become the standard non-parametric approach for equipping Large Language Models (LLMs) with up-to-date knowledge and mitigating catastrophic forgetting common in continual learning. However, standard RAG, relying on independent passage retrieval, fails to capture the interconnected nature of human memory crucial for complex reasoning (associativity) and contextual understanding (sense-making). While structured RAG methods like HippoRAG 2 utilize knowledge graphs built from triples, we argue that the inherent context loss of knowledge triples limits fidelity. We introduce PropRAG, leveraging context-rich propositions and a novel LLM-free online beam search over proposition paths to find multi-step reasoning chains. PropRAG achieves state-of-the-art zero-shot Recall@5 and F1 scores on 2Wiki, HotpotQA, and MuSiQue, advancing non-parametric continual learning by improving evidence retrieval through richer representation and efficient reasoning path discovery.
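The abstract's core retrieval idea, beam search over chains of propositions linked by shared entities, can be sketched as follows. This is a minimal illustrative sketch, not PropRAG's actual implementation: the function name, the precomputed `query_sim` scores, the entity-overlap `adjacency` map, and the mean-similarity path score are all assumptions made for the example.

```python
def beam_search_paths(query_sim, adjacency, start_props, beam_width=4, max_hops=3):
    """Find top-scoring multi-hop proposition paths (illustrative sketch).

    query_sim:   dict mapping proposition id -> similarity to the query (assumed precomputed)
    adjacency:   dict mapping proposition id -> ids of propositions sharing an entity with it
    start_props: candidate first-hop proposition ids
    """
    # Each beam is (path, score); here a path is scored by its mean query similarity.
    beams = sorted(
        [([p], query_sim[p]) for p in start_props],
        key=lambda b: b[1], reverse=True,
    )[:beam_width]

    for _ in range(max_hops - 1):
        candidates = []
        for path, score in beams:
            # Extend only to unvisited neighbors to avoid cycles.
            next_ids = [n for n in adjacency.get(path[-1], []) if n not in path]
            if not next_ids:
                candidates.append((path, score))  # dead end: carry the path forward
                continue
            for nxt in next_ids:
                new_path = path + [nxt]
                new_score = sum(query_sim[q] for q in new_path) / len(new_path)
                candidates.append((new_path, new_score))
        # Keep only the best beam_width partial paths; no LLM call is involved.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]

    return beams
```

Because scoring relies only on precomputed similarities and entity overlap, the search itself runs without any LLM in the loop, which is the "LLM-free online beam search" property the abstract highlights.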


