EMNLP 2025

November 07, 2025

Suzhou, China


Slot filling is a crucial subtask in spoken language understanding (SLU), traditionally implemented as a cascade of speech recognition followed by one or more natural language understanding (NLU) components. The recent advent of speech-based large language models (speechLLMs), which integrate speech and textual foundation models, has opened new avenues for performing speech understanding tasks in a more unified, generative, and instruction-following manner, while promising data and compute efficiency as well as zero-shot abilities such as generalizing to unseen slot labels. We address slot filling by establishing an empirical upper bound for the task, identifying performance, robustness, and generalization gaps, and proposing improvements to the training data, architecture, and training strategies to bridge these gaps. We show that each of these measures narrows the gap substantially, while highlighting practical challenges and providing empirical guidance and insights for harnessing these emerging models.
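For readers unfamiliar with the task, the sketch below illustrates slot filling posed as a generative, instruction-following problem, as the abstract describes. It is a minimal illustration, not the authors' method: the prompt template, slot labels, and example utterance are assumptions made for exposition.

```python
# A minimal sketch of slot filling as an instruction-following, generative
# task. Everything here (prompt wording, slot labels, output format) is an
# illustrative assumption, not the setup used in the paper.

import json

def build_slot_filling_prompt(transcript: str, slot_labels: list[str]) -> str:
    """Compose an instruction asking an LLM to extract slot values.

    In a cascaded pipeline, `transcript` comes from a separate ASR model;
    a speechLLM would instead consume the audio directly alongside the
    instruction, removing the intermediate text step.
    """
    labels = ", ".join(slot_labels)
    return (
        "Extract the following slots from the utterance and answer in JSON "
        f"with one key per slot ({labels}); use null for absent slots.\n"
        f"Utterance: {transcript}"
    )

# Because slot labels are given in the instruction rather than fixed in a
# classifier head, an unseen label ("airline" here) can be requested at
# inference time -- the zero-shot generalization the abstract refers to.
prompt = build_slot_filling_prompt(
    "book me a flight to Boston on Friday with Delta",
    ["destination_city", "departure_date", "airline"],
)
print(prompt)

# A well-behaved model would generate output along these lines:
expected = {"destination_city": "Boston",
            "departure_date": "Friday",
            "airline": "Delta"}
print(json.dumps(expected))
```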

