EMNLP 2025

November 06, 2025

Suzhou, China


With the advancement of large language models (LLMs), growing concerns about their controllability have been raised in recent research. In this paper, we argue for the importance of Knowledge-Constrained Responsiveness (KCR), which ensures that LLMs comply with human-defined constraints. However, KCR is an implicit, unobservable capability of LLMs, functioning as a black box that currently cannot be assessed quantitatively. To address this, we first define the "permitted boundary" and introduce the "boundary bias" to characterize KCR. We propose six metrics that quantify the boundary bias of LLMs and thereby assess KCR. Furthermore, we establish a benchmark with two new datasets, KCR-SimpleQA and KCR-WebNLG, to evaluate the performance of LLMs. Our extensive experiments show that the tested LLMs still struggle, to varying degrees, to adhere to constraints, especially when they lack the corresponding knowledge.


