EMNLP 2025

November 07, 2025

Suzhou, China

Compositional generalization is an important ability that large language models (LLMs) need for semantic parsing. Previous research typically relies on dataset-specific designs or a large number of demonstration samples to improve the compositional generalization of LLMs on semantic parsing. We revisit this issue and find that when the number of demonstration samples is limited to the theoretical lower bound for achieving compositional generalization (minimum coverage), current advanced LLMs cannot reliably achieve good compositional generalization across different semantic parsing datasets without dataset-specific designs. To address this problem, we propose Multi-level Component Composition (MC^2), a minimum-coverage and dataset-agnostic framework based on input primitives, which helps LLMs achieve compositional generalization generically by selecting and organizing samples from multiple compositional levels that satisfy primitive coverage. Experiments and analysis show that MC^2 effectively improves the compositional generalization of LLMs on different semantic parsing datasets in the minimum-coverage setting.
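Since the abstract only outlines MC^2 at a high level, the following is a minimal sketch, assuming a greedy set-cover heuristic over input primitives, of how minimum-coverage demonstration selection across compositional levels could look. The Demo structure, field names, and tie-breaking rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch (assumption): greedy minimum-coverage demonstration
# selection over input primitives, organized by compositional level.
# The Demo structure and the tie-breaking rule are illustrative; the
# abstract does not specify MC^2's exact procedure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Demo:
    text: str              # candidate sample's natural-language input
    primitives: frozenset  # input primitives appearing in the sample
    level: int             # compositional level (e.g., how many primitives are composed)

def select_demos(target_primitives: set, pool: list) -> list:
    """Pick a small demo set whose primitives cover the target input."""
    uncovered = set(target_primitives)
    chosen = []
    candidates = list(pool)
    while uncovered and candidates:
        # Greedy set cover: take the demo covering the most uncovered
        # primitives; break ties toward lower compositional levels.
        best = max(candidates,
                   key=lambda d: (len(d.primitives & uncovered), -d.level))
        if not best.primitives & uncovered:
            break  # remaining primitives cannot be covered by this pool
        chosen.append(best)
        uncovered -= best.primitives
        candidates.remove(best)
    # Organize the prompt from low to high compositional level.
    return sorted(chosen, key=lambda d: d.level)

# Example with SCAN-style primitives:
pool = [
    Demo("walk", frozenset({"walk"}), 1),
    Demo("jump twice", frozenset({"jump", "twice"}), 2),
    Demo("walk and jump", frozenset({"walk", "and", "jump"}), 3),
]
print([d.text for d in select_demos({"walk", "twice"}, pool)])
# -> ['walk', 'jump twice']
```

The greedy loop keeps the demonstration count near the coverage lower bound, while the final sort reflects the abstract's idea of organizing samples from multiple compositional levels rather than drawing them from a single level.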

Downloads

Slides · Paper · Transcript (English, automatic)
