AAAI 2026

January 23, 2026

Singapore, Singapore


Large Language Models (LLMs) demonstrate strong capabilities in code generation but often lack adaptability in planning and refinement. We propose Self-PR, a framework that integrates adaptive plan selection and iterative repair to improve correctness and generalization. Self-PR constructs a reusable plan database via task clustering and trains a selector to choose task-specific strategies. Incorrect outputs are refined through multiple rounds of feedback until they pass. Trained only on HumanEval, Self-PR generalizes well to out-of-distribution tasks (MBPP), improving pass@1 by +4.9% on HumanEval and +5.5% on MBPP compared to Modularization-of-Thought prompting. Experiments across Llama-3 (8B, 70B) and GPT-4o-mini confirm robustness and scalability. These findings suggest that adaptive planning and feedback-driven repair are essential for reliable LLM-based code generation.
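The abstract describes a two-stage loop: a selector picks a task-specific plan from a clustered plan database, and incorrect outputs are repaired over multiple feedback rounds until they pass. A minimal, runnable sketch of that control flow is below. All component names here (`select_plan`, `run_tests`, the toy `generate`/`repair` stand-ins) are illustrative assumptions, not the paper's actual implementation.

```python
def select_plan(task, plan_db):
    """Pick the plan whose keyword matches the task; fall back to a default.
    A stand-in for the paper's trained selector."""
    for keyword, plan in plan_db.items():
        if keyword in task:
            return plan
    return "direct"

def run_tests(candidate, tests):
    """Return (passed, feedback) for a candidate solution."""
    for inp, expected in tests:
        got = candidate(inp)
        if got != expected:
            return False, f"input {inp!r}: expected {expected!r}, got {got!r}"
    return True, ""

def self_pr(task, plan_db, generate, repair, tests, max_rounds=3):
    """Generate with a selected plan, then repair with feedback until tests pass."""
    plan = select_plan(task, plan_db)
    candidate = generate(task, plan)
    for _ in range(max_rounds):
        ok, feedback = run_tests(candidate, tests)
        if ok:
            return candidate
        candidate = repair(candidate, feedback)  # feedback-driven refinement
    return candidate

# Toy usage: "generation" first yields a buggy doubler; one repair round fixes it.
plan_db = {"double": "loop-free arithmetic"}
generate = lambda task, plan: (lambda x: x + 1)   # buggy first attempt
repair = lambda cand, feedback: (lambda x: x * 2)  # corrected solution
result = self_pr("double the input", plan_db, generate, repair,
                 tests=[(3, 6), (5, 10)])
```

In this toy run the first candidate fails the tests, the feedback triggers one repair round, and the repaired candidate passes; the real system would prompt an LLM at both the generate and repair steps.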


