AAAI 2026

January 24, 2026

Singapore, Singapore


Recent advances in large language models (LLMs) have greatly improved their reasoning and decision-making abilities when deployed as agents. Richer reasoning, however, often comes at the cost of a longer chain of thought (CoT), hampering interaction efficiency in real-world scenarios. Moreover, a systematic definition of LLM-agent efficiency is still lacking, which hinders targeted improvements. To this end, we introduce dual-efficiency, comprising (i) step-level efficiency, which minimizes tokens per step, and (ii) trajectory-level efficiency, which minimizes the number of steps to complete a task. Building on this definition, we propose DEPO, a dual-efficiency preference-based optimization method that jointly rewards succinct responses and fewer action steps. Experiments on WebShop and BabyAI show that DEPO cuts token usage by up to 60.9% and steps by up to 26.9%, while achieving up to a 29.3% improvement in task performance. DEPO also generalizes to three out-of-domain math benchmarks and retains its efficiency gains when trained on only 25% of the data. The code is available in the appendix.
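The dual-efficiency idea above — prefer trajectories that succeed, take fewer steps, and use fewer tokens — can be sketched as a preference-pair construction for DPO-style training. This is a minimal illustration assuming a simple lexicographic ordering (success, then steps, then tokens); the `Trajectory` fields and the `prefer` tie-breaking rule are hypothetical, not the paper's exact reward.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    success: bool       # did the agent complete the task?
    num_steps: int      # trajectory-level efficiency: fewer steps is better
    total_tokens: int   # step-level efficiency: fewer tokens overall is better


def prefer(a: Trajectory, b: Trajectory) -> Trajectory:
    """Pick the preferred trajectory of a pair for preference optimization.

    Hypothetical ordering: task success dominates, then fewer action
    steps, then fewer total tokens. The chosen/rejected pair would then
    feed a DPO-style loss that rewards succinct, short trajectories.
    """
    def key(t: Trajectory):
        # False sorts before True, so negate success to rank it first.
        return (not t.success, t.num_steps, t.total_tokens)

    return a if key(a) <= key(b) else b
```

For example, between two successful trajectories, the one with fewer steps is chosen even if its individual responses are slightly longer, matching the trajectory-level component of the definition.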

