
AAAI 2026

January 24, 2026

Singapore, Singapore


Through reinforcement learning (RL) with outcome-correctness rewards, large reasoning models (LRMs) have demonstrated substantial success on complex reasoning tasks by leveraging scaled inference computation. However, the sparse, one-sided reward, focused solely on final correctness, limits their ability to provide detailed supervision over the internal reasoning process. This deficiency leads to suboptimal reasoning quality, manifesting as issues like over-thinking, under-thinking, redundant-thinking, and disordered-thinking. Inspired by recent progress in LRM self-rewarding, we introduce a self-rewriting framework in which a model rewrites its own reasoning texts and subsequently learns from the rewritten reasoning to improve the quality of its internal thought process. For the algorithm design, we propose a selective rewriting approach in which only "simple" samples, defined by the model's consistent correctness, are rewritten, thereby preserving the original GRPO loss on all other samples. For the practical implementation, we compile rewriting and vanilla generation within a single batch, maintaining the scalability of the RL algorithm while introducing only 10% overhead. Extensive experiments on diverse tasks with different model sizes validate the effectiveness of self-rewriting. In terms of the accuracy-length tradeoff, self-rewriting achieves improved accuracy (+0.6) with substantially shorter reasoning (-46%), even without explicit instructions to truncate reasoning, outperforming existing strong baselines. In terms of internal quality, self-rewriting achieves significantly higher scores (+7.2) under the LLM-as-a-judge metric. All relevant code and data will be released.
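The selective-rewriting criterion described in the abstract — route only "simple" samples (those the model answers correctly across all rollouts) to the rewriting pass, leaving the rest on the vanilla GRPO generation path — can be sketched as follows. This is a minimal illustration under stated assumptions; the function names (`is_simple`, `route_batch`) and the batch representation are hypothetical, not from the paper.

```python
from typing import Dict, List, Tuple

def is_simple(rollout_correct: List[bool]) -> bool:
    """A sample counts as 'simple' if every rollout reached the correct answer
    (the 'consistent correctness' criterion named in the abstract)."""
    return all(rollout_correct)

def route_batch(batch: List[Dict]) -> Tuple[List[Dict], List[Dict]]:
    """Split one batch into samples sent to the rewriting pass and samples
    kept on the vanilla generation path, so both run in a single batch."""
    rewrite, vanilla = [], []
    for sample in batch:
        (rewrite if is_simple(sample["rollout_correct"]) else vanilla).append(sample)
    return rewrite, vanilla

if __name__ == "__main__":
    batch = [
        {"id": 0, "rollout_correct": [True, True, True, True]},   # consistently correct
        {"id": 1, "rollout_correct": [True, False, True, True]},  # mixed outcomes
    ]
    rewrite, vanilla = route_batch(batch)
    print([s["id"] for s in rewrite], [s["id"] for s in vanilla])  # prints: [0] [1]
```

Keeping both sub-batches together is what lets the rewriting step reuse the same forward pass as ordinary rollout generation, which is how the abstract's modest (10%) overhead becomes plausible.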

