EMNLP 2025

November 05, 2025

Suzhou, China


Long-context language models (LCLMs), characterized by their extensive context windows, are becoming increasingly popular. However, although they are nearly perfect at standard long-context retrieval tasks, our evaluations show that they struggle with two basic cases, "multi-matching retrieval" and "logic-based retrieval", which lie beyond LCLMs' ability boundary. We find that both can be handled well given a sufficient number of reasoning steps, guided by specific CoT prompts, indicating the potential necessity of combining long-context tasks with chain-of-thought (CoT) methods for more advanced long-context handling. However, purely CoT-based methods are too time-consuming when the context is very long, so accurate and efficient long-context handling still has a long way to go.
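The paper's actual CoT prompts are not reproduced here, but a minimal sketch can illustrate the idea for the multi-matching case: rather than asking the model to emit all matches at once, the prompt walks it through the context chunk by chunk, recording matches as explicit reasoning steps before merging them. The chunking scheme, the prompt wording, and the `build_multi_match_cot_prompt` helper below are illustrative assumptions, not the authors' method.

```python
# Sketch of a CoT-style prompt for multi-matching retrieval over a long
# context. Assumptions (not from the paper): fixed-size character chunks
# and this particular step-by-step instruction wording.

def build_multi_match_cot_prompt(context: str, query: str,
                                 chunk_chars: int = 2000) -> str:
    """Split a long context into chunks and request per-chunk matching steps."""
    chunks = [context[i:i + chunk_chars]
              for i in range(0, len(context), chunk_chars)]
    numbered = "\n\n".join(f"[Chunk {i + 1}]\n{c}"
                           for i, c in enumerate(chunks))
    return (
        f"Context, split into {len(chunks)} chunks:\n\n{numbered}\n\n"
        f"Task: find EVERY item in the context matching: {query}\n"
        "Think step by step: for each chunk in order, state which matches "
        "it contains (or 'none'), then combine all per-chunk matches into "
        "a final deduplicated list."
    )

if __name__ == "__main__":
    ctx = "apple pie recipe ... apple cider notes ... pear tart ... " * 50
    print(build_multi_match_cot_prompt(ctx, "every mention of 'apple'")[:400])
```

Note that the per-chunk reasoning makes the model's output scale with the context length, which is precisely the efficiency problem the abstract identifies for purely CoT-based methods.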

Downloads

Slides · Paper · Transcript (English, automatic)

