EMNLP 2025

November 05, 2025

Suzhou, China


Large Language Models (LLMs) can struggle to balance gullibility to misinformation and resistance to valid corrections in persuasive dialogues, a critical challenge for reliable deployment. We introduce DuET-PD (Dual Evaluation for Trust in Persuasive Dialogues), a framework evaluating multi-turn stance-change dynamics across dual dimensions: persuasion type (corrective/misleading) and domain (knowledge/safety), using MMLU-Pro and SALAD-Bench. With DuET-PD, we uncover a primacy effect in initial persuasion and a capability-robustness trade-off: capable models often resist valid corrections, especially in safety tasks, while open-source models show higher gullibility. To address this, we introduce Holistic DPO, a training approach balancing positive and negative persuasion examples. Unlike prompting or resist-only training, Holistic DPO enhances both robustness to misinformation and receptiveness to corrections. Our framework and quantitative insights, coupled with the Holistic DPO method, enable LLMs to better navigate persuasive dialogues, improving reliability in knowledge- and safety-critical contexts.
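To illustrate the idea behind Holistic DPO described above, the sketch below shows one plausible way to build a balanced preference dataset from persuasive dialogues. This is not the authors' released code; the class, field, and function names are illustrative assumptions, and only the balancing principle (equal weight on corrective and misleading persuasion) follows the abstract.

```python
# Minimal sketch (assumed, not the paper's implementation) of constructing
# balanced preference pairs for Holistic-DPO-style training.

from dataclasses import dataclass
from typing import Dict, List
import random


@dataclass
class DialogueExample:
    prompt: str            # multi-turn dialogue ending in a persuasion attempt
    initial_answer: str    # the model's original stance
    revised_answer: str    # the stance after yielding to the persuasion


def to_preference_pair(ex: DialogueExample, corrective: bool) -> Dict[str, str]:
    """Map one persuasive dialogue to a (prompt, chosen, rejected) triple.

    Corrective persuasion: the model should update, so the revised stance is
    'chosen'. Misleading persuasion: the model should hold firm, so the
    original stance is 'chosen'.
    """
    if corrective:
        chosen, rejected = ex.revised_answer, ex.initial_answer
    else:
        chosen, rejected = ex.initial_answer, ex.revised_answer
    return {"prompt": ex.prompt, "chosen": chosen, "rejected": rejected}


def build_holistic_set(corrective: List[DialogueExample],
                       misleading: List[DialogueExample],
                       seed: int = 0) -> List[Dict[str, str]]:
    """Balance both persuasion types so the preference signal rewards neither
    blanket resistance nor blanket receptiveness."""
    n = min(len(corrective), len(misleading))
    rng = random.Random(seed)
    pairs = ([to_preference_pair(ex, True) for ex in rng.sample(corrective, n)] +
             [to_preference_pair(ex, False) for ex in rng.sample(misleading, n)])
    rng.shuffle(pairs)
    return pairs
```

The resulting (prompt, chosen, rejected) records follow the format accepted by common DPO implementations (for example, trl's DPOTrainer); the paper's actual data construction and training setup may differ.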

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)

Next from EMNLP 2025

SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs
Technical paper · Ke Lin and 9 other authors · EMNLP 2025 · 05 November 2025
