EMNLP 2025

November 05, 2025

Suzhou, China


Large Language Models (LLMs) have demonstrated an impressive ability to retrieve and summarize complex information, but their reliability under conflicting contexts remains poorly understood. We introduce an adversarial extension of the Needle-in-a-Haystack framework in which three mutually exclusive “needles” are embedded into long documents. By systematically manipulating factors such as position, repetition, layout, and domain relevance, we evaluate how LLMs handle contradictions. We find that models almost always fail to signal uncertainty and instead confidently select a single alternative, exhibiting strong and consistent biases toward repetition, recency, and specific surface forms. We further analyze whether these patterns are shared within a model family and across model sizes, and we compare probability-based and generation-based retrieval. Our framework highlights critical limitations in current LLMs’ robustness to contradiction, reveals potential shortcomings in RAG systems’ ability to handle noisy or manipulated inputs, and poses challenges for deployment in high-stakes applications.
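As a concrete illustration of the setup, the sketch below shows one way such an adversarial haystack could be assembled. The function name, the paragraph-level insertion scheme, and the even spacing of repeated needles are assumptions made for illustration, not the paper's actual implementation.

```python
def build_adversarial_haystack(
    filler_paragraphs: list[str],
    needles: list[str],       # mutually exclusive claims, e.g. three of them
    depths: list[float],      # relative insertion depth in [0, 1] per needle
    repetitions: list[int],   # how many copies of each needle to insert
) -> str:
    """Embed conflicting needles into filler text at controlled depths.

    Repeated needles are placed at evenly spaced offsets after the
    first copy (a simplifying assumption for this sketch).
    """
    assert len(needles) == len(depths) == len(repetitions)
    doc = list(filler_paragraphs)
    n = len(doc)
    inserts = []
    for needle, depth, reps in zip(needles, depths, repetitions):
        base = int(depth * n)
        step = max(1, n // (reps + 1))
        for r in range(reps):
            inserts.append((min(n, base + r * step), needle))
    # Apply insertions from the back so earlier ones don't shift later indices.
    for idx, needle in sorted(inserts, reverse=True):
        doc.insert(idx, needle)
    return "\n\n".join(doc)


# Example: three contradictory statements about the same fact, with the
# last one repeated to probe the repetition bias described above.
haystack = build_adversarial_haystack(
    filler_paragraphs=["(filler paragraph)"] * 50,
    needles=[
        "The vault code is 1111.",
        "The vault code is 2222.",
        "The vault code is 3333.",
    ],
    depths=[0.1, 0.5, 0.9],   # early, middle, late in the document
    repetitions=[1, 1, 3],
)
```

A retrieval question over the haystack (here, “What is the vault code?”) then reveals which of the manipulated factors, such as depth or repetition count, the model favors when the context contradicts itself.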


