EMNLP 2025

November 06, 2025

Suzhou, China


Decoding strategies manipulate the probability distribution underlying the output of a language model and can therefore affect both generation quality and its uncertainty. In this study, we investigate the impact of decoding strategies on uncertainty estimation in Large Language Models (LLMs). Our experiments show that Contrastive Search produces better uncertainty estimates on average across a range of alignment-tuned LLMs. In contrast, the benefits of these strategies sometimes diverge when the model is only post-trained with supervised fine-tuning, i.e. without explicit alignment.
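Since the abstract centers on Contrastive Search as a decoding strategy, the sketch below illustrates one decoding step of that method under common assumptions: each candidate token is reranked by model confidence minus a "degeneration penalty" (its maximum cosine similarity to the hidden states of the context). The function name, array shapes, and default hyperparameters here are illustrative, not taken from the paper.

```python
import numpy as np

def contrastive_search_step(probs, cand_hidden, ctx_hidden, k=4, alpha=0.6):
    """One hypothetical contrastive-search decoding step.

    probs:       (vocab,) next-token probabilities from the model
    cand_hidden: (vocab, d) hidden representation of each candidate token
    ctx_hidden:  (ctx_len, d) hidden states of the tokens generated so far
    Returns the id of the selected token.
    """
    # Restrict reranking to the k most probable candidates.
    top_k = np.argsort(probs)[::-1][:k]
    # Normalize context states once for cosine similarity.
    ctx = ctx_hidden / np.linalg.norm(ctx_hidden, axis=1, keepdims=True)
    best_id, best_score = -1, -np.inf
    for tok in top_k:
        h = cand_hidden[tok]
        h = h / np.linalg.norm(h)
        # Degeneration penalty: max cosine similarity to any context state.
        penalty = float(np.max(ctx @ h))
        # Trade off model confidence against repetition of the context.
        score = (1 - alpha) * probs[tok] - alpha * penalty
        if score > best_score:
            best_id, best_score = tok, score
    return int(best_id)
```

With alpha = 0, this reduces to greedy decoding over the top-k candidates; larger alpha increasingly penalizes tokens whose representations repeat the context, which is how the strategy reshapes the output distribution that uncertainty estimates are computed from.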
