AAAI 2026

January 23, 2026

Singapore, Singapore


Outcome-based reinforcement learning has made notable advances in training language models (LMs) for reasoning. However, without explicit incentives and controls, this paradigm struggles to reliably elicit high-quality reasoning trajectories with diverse actions, particularly for models whose pretraining lacked extensive reasoning data. To this end, we introduce MetaAct-RL, a new RL framework that frames an LM's thought process as sequential decision making over meta-actions. At each step, the model chooses and executes a high-level action, such as forward reasoning, critique, or refinement, to gradually reach the correct answer. To encourage deeper exploration and richer action diversity, and to improve sampling efficiency during RL optimization, MetaAct-RL incorporates a length-based reward and regularization together with a key-state restart mechanism. Extensive experiments across six benchmark tasks show that MetaAct-RL improves reasoning performance by $7.99$ points on Llama3.2-1B and $7.17$ points on Llama3.1-8B relative to the vanilla RL baseline. Moreover, on the challenging AIME-2024 benchmark, our method outperforms vanilla RL by $7.5$ points with Qwen2.5-1.5B.
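The abstract's core loop can be sketched as follows. This is a minimal, illustrative interpretation only: the meta-action names are taken from the abstract, but the function names, the length-regularized reward shape, and the restart helper are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the MetaAct-RL ideas from the abstract.
# META_ACTIONS, the reward shaping, and restart_from_key_state are
# illustrative assumptions, not the paper's actual code.

META_ACTIONS = ["forward_reasoning", "critique", "refinement"]

def length_reward(outcome_correct: bool, n_steps: int,
                  target_len: int = 6, coef: float = 0.1) -> float:
    """Outcome reward shaped by a length-based regularizer that
    penalizes trajectories far from a target step count."""
    outcome = 1.0 if outcome_correct else 0.0
    return outcome - coef * abs(n_steps - target_len)

def rollout(policy, state, max_steps: int = 10):
    """Sample a trajectory of (state, meta_action) pairs until the
    policy emits a terminal 'answer' action or the budget runs out."""
    trajectory = []
    for _ in range(max_steps):
        action = policy(state)
        trajectory.append((state, action))
        if action == "answer":
            break
        state = state + [action]  # record the executed meta-action
    return trajectory

def restart_from_key_state(trajectory, key_idx: int):
    """Key-state restart: recover an intermediate state from a sampled
    trajectory so fresh rollouts can resume there instead of from
    scratch (one plausible reading of the mechanism)."""
    state, _ = trajectory[key_idx]
    return state
```

A deterministic dummy policy is enough to exercise the loop; in the actual framework the policy would be the LM choosing its next meta-action.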

