AAAI 2026

January 25, 2026

Singapore, Singapore


The rise of large language models (LLMs) has sparked interest in coding assistants. While general-purpose programming languages are well supported, generating code for domain-specific languages remains challenging for LLMs. In this paper, we focus on LLM-based generation of Answer Set Programming (ASP) code, a particularly effective approach for solving combinatorial search problems. However, the effectiveness of LLMs in ASP code generation is hindered by the scarcity of ASP examples seen during pre-training.

We introduce a novel approach for solver-guided instruction tuning of LLMs to address the highly complex semantic parsing task inherent in ASP code generation. We sample ASP program continuations proposed by LLMs for solving logic puzzles and categorize them as chosen or rejected instances based on solver feedback. We then apply supervised fine-tuning to train LLMs on the curated data, and further improve robustness using a solver-guided search that includes best-of-N sampling. Our experiments demonstrate consistent improvements in two distinct prompting settings on different datasets.

