EMNLP 2025

November 07, 2025

Suzhou, China


Disagreement detection is a crucial task in natural language processing (NLP), particularly for analyzing online discussions and social media content. Large language models (LLMs) have achieved significant advances across various NLP tasks, yet their performance on disagreement detection is limited by two issues: a conceptual gap and a reasoning gap. In this paper, we propose a novel two-stage framework, Concept Alignment and Reasoning Enhancement (CARE), to tackle these issues. The first stage, Concept Alignment, addresses the gap between human experts and the model by extracting a sub-concept taxonomy, aligning the model's comprehension with that of the experts. The second stage, Reasoning Enhancement, improves the model's reasoning capabilities through a curriculum learning workflow, which includes rationale-to-critique and counterfactual-to-detection steps to reduce spurious associations. Extensive experiments on the disagreement detection task demonstrate the effectiveness of our framework, showing superior performance in both zero-shot and supervised settings, within and across domains.
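The two-stage pipeline described above can be sketched roughly as follows. This is a minimal illustrative mock, not the authors' implementation: the function names, the fixed toy taxonomy, and the keyword heuristic standing in for the curriculum-trained detector are all assumptions for illustration.

```python
# Hypothetical sketch of the CARE two-stage pipeline from the abstract.
# All names, the toy taxonomy, and the heuristic below are illustrative
# assumptions; a real system would use LLM prompting and fine-tuning.

def extract_subconcept_taxonomy(expert_definition: str) -> list[str]:
    # Stage 1 (Concept Alignment): decompose the expert notion of
    # "disagreement" into sub-concepts that guide the model.
    # A real system would elicit these from an LLM; here they are fixed.
    return ["direct rebuttal", "sarcastic dismissal", "counter-evidence"]

def detect_disagreement(text: str, taxonomy: list[str]) -> bool:
    # Stage 2 (Reasoning Enhancement), collapsed to a trivial cue check.
    # The paper's curriculum (rationale-to-critique, then counterfactual-
    # to-detection) is replaced here by a keyword heuristic.
    cues = {"disagree", "wrong", "no,"}
    lowered = text.lower()
    return any(cue in lowered for cue in cues)

taxonomy = extract_subconcept_taxonomy("Disagreement is a rejection of a prior claim.")
print(detect_disagreement("No, that's simply wrong.", taxonomy))  # True
print(detect_disagreement("I fully agree with this.", taxonomy))  # False
```

The point of the sketch is only the two-stage structure: taxonomy extraction happens once per domain, and the detector consumes both the input text and the taxonomy.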
