EMNLP 2025

November 07, 2025

Suzhou, China


Aligned instruction-following models fulfill user requests better than their unaligned counterparts. However, evaluations of such models exhibit a length bias, which training algorithms tend to exploit by learning longer responses. In this work we show how to train models that can be controlled at inference time with instructions containing desired length constraints. Such models are superior in length-instructed evaluations, outperforming standard instruction-following models such as GPT-4, Llama 3, and Mixtral.
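The idea of inference-time length control described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the prompt format, the word-based budget, and both function names are assumptions made for the example.

```python
# Hypothetical sketch of length-instructed prompting: the user request is
# augmented with an explicit length budget, and a response is judged to
# violate the constraint if it exceeds that budget.

def add_length_instruction(prompt: str, max_words: int) -> str:
    """Prepend a length constraint to the user request (assumed format)."""
    return f"Answer the following in at most {max_words} words.\n{prompt}"

def violates_constraint(response: str, max_words: int) -> bool:
    """True if the response exceeds the word budget."""
    return len(response.split()) > max_words

prompt = add_length_instruction("Explain beam search.", 50)
print(prompt.splitlines()[0])
print(violates_constraint("Beam search keeps the top-k partial hypotheses.", 50))
```

A violation check of this kind could serve as the signal in a length-instructed evaluation, penalizing models that overshoot the requested budget.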

