EMNLP 2025

November 07, 2025

Suzhou, China


Large language models (LLMs) encode vast amounts of world knowledge but remain static once trained, making timely integration of emerging facts prohibitively expensive via full retraining. Knowledge-editing techniques have therefore emerged to inject or overwrite specific facts in LLMs, yet they either over-rely on superficial cues or require complex, iterative pipelines that collapse under noisy, multi-hop conditions. We introduce Reason-KE, an end-to-end reasoning-chain-based editing framework that steers a pretrained LLM through four structured stages (fact acknowledgment, relevance determination, selective application, and final reasoning) to filter distractors in a single pass. Trained on MQuAKE-CF with up to four irrelevant facts, Reason-KE elevates Qwen2.5-7B's multi-hop QA accuracy to 90.2% (+17.6 pp) while suffering only a 6.3% drop under heavy distraction and less than 1% when answers are leaked. Our quantitative analysis confirms Reason-KE's resilience and efficiency, establishing a new state of the art for reliable LLM knowledge updates. The code will be released.
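The abstract describes a single-pass, four-stage reasoning chain for applying edited facts while filtering distractors. The sketch below shows what such a prompt template could look like; the stage wording, the build_prompt helper, and the example facts are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a four-stage, single-pass reasoning-chain prompt in the
# spirit of Reason-KE. All names and wording here are hypothetical.

STAGES = [
    "1. Fact acknowledgment: restate the provided edited facts.",
    "2. Relevance determination: decide which facts bear on the question.",
    "3. Selective application: use only the relevant facts; ignore distractors.",
    "4. Final reasoning: derive the multi-hop answer step by step.",
]

def build_prompt(edited_facts: list[str], question: str) -> str:
    """Assemble a single prompt that walks the model through all four stages."""
    facts_block = "\n".join(f"- {fact}" for fact in edited_facts)
    stages_block = "\n".join(STAGES)
    return (
        "You are given newly edited facts; some of them may be irrelevant.\n"
        f"Edited facts:\n{facts_block}\n\n"
        f"Question: {question}\n\n"
        "Answer by working through these stages in order:\n"
        f"{stages_block}\n"
        "Finish with 'Final answer:' followed by the answer."
    )

if __name__ == "__main__":
    facts = [
        "The CEO of Company X is Alice.",       # relevant edited fact
        "The capital of Country Y is Z-City.",  # distractor
    ]
    print(build_prompt(facts, "Who leads the company headquartered in Town W?"))
```

The prompt string would then be passed to the edited model in a single forward pass, rather than through an iterative retrieve-and-revise loop.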

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)
