EMNLP 2025

November 07, 2025

Suzhou, China


In tasks like question answering and fact-checking, models must discern relevant information from extensive corpora in an "open-book" setting. Conventional transformer-based models excel at classifying input data, but (i) often falter due to sensitivity to noise and (ii) lack explainability regarding their decision process. To address these challenges, we introduce ATTUN, a novel transformer architecture designed to enhance model transparency and resilience to noise by refining the attention mechanisms. Our approach involves a dedicated module that directly modifies attention weights, allowing the model to both improve predictions and identify the most relevant sections of input data. We validate our methodology on fact-checking datasets and show promising results in question answering. Experiments show up to a 51% improvement in F1 score over state-of-the-art systems for detecting relevant context, and up to an 18% gain in task accuracy when ATTUN is integrated into an existing model.
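
The abstract describes a module that rescales attention weights so the model can both suppress noisy context and point to the evidence behind a prediction. Below is a minimal PyTorch sketch of that idea; the class name AttentionTuner, the sigmoid gating over key states, and the renormalization step are illustrative assumptions, not the authors' actual ATTUN implementation.

import torch
import torch.nn as nn

# Hypothetical illustration of an attention-tuning module in the spirit of
# ATTUN; not the authors' implementation.
class AttentionTuner(nn.Module):
    """Re-weights post-softmax attention so noisy context is down-weighted
    and a per-token relevance score is exposed for explanation."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Scores each key token's relevance from its hidden state.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, attn_weights: torch.Tensor, key_states: torch.Tensor):
        # attn_weights: (batch, heads, q_len, k_len), already softmaxed
        # key_states:   (batch, k_len, hidden_dim)
        relevance = torch.sigmoid(self.scorer(key_states)).squeeze(-1)  # (batch, k_len)
        tuned = attn_weights * relevance[:, None, None, :]
        # Renormalize so each query's weights still sum to 1.
        tuned = tuned / tuned.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        # The relevance scores double as the explanation signal.
        return tuned, relevance

# Example usage with dummy shapes: 2 passages, 4 heads, 10 tokens, dim 64.
tuner = AttentionTuner(hidden_dim=64)
attn = torch.softmax(torch.randn(2, 4, 10, 10), dim=-1)
keys = torch.randn(2, 10, 64)
tuned_attn, token_relevance = tuner(attn, keys)  # (2, 4, 10, 10) and (2, 10)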
