EMNLP 2025

November 07, 2025

Suzhou, China


Long chain-of-thought (CoT) prompting helps Large Language Models (LLMs) solve difficult problems, but very long traces often slow or even degrade performance on fast, intuitive "System-1" tasks. We introduce Connector-Aware Compact CoT (CAC-CoT), a method that deliberately restricts reasoning to a small, fixed set of connector phrases, steering the model toward concise and well-structured explanations. Despite its simplicity, our synthetic data generation method with Gemini-2.0-Flash yields high-quality training data. CAC-CoT achieves approximately 85% on GSM8K and approximately 40% on GPQA (System-2) while retaining approximately 90% on S1-Bench (System-1). Its reasoning traces average approximately 300 tokens (ART), about one-third the length of baseline traces, delivering higher efficiency without loss of accuracy.
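To make the core idea concrete, the sketch below shows one plausible way to constrain reasoning traces to a small, fixed set of connector phrases and to keep them compact. This is a minimal illustration, not the authors' released code: the connector list, prompt wording, token budget, and helper names are all hypothetical assumptions for demonstration.

```python
# Hypothetical sketch of connector-constrained reasoning traces.
# The connector set, prompt template, and 300-token budget are assumptions
# chosen to illustrate the idea described in the abstract.

ALLOWED_CONNECTORS = ["First,", "Then,", "Because", "Therefore,", "Finally,"]

PROMPT_TEMPLATE = (
    "Solve the problem step by step. Begin every reasoning step with one of "
    "these connector phrases only: {connectors}. Keep the trace short.\n\n"
    "Problem: {question}\nReasoning:"
)


def build_prompt(question: str) -> str:
    """Assemble a connector-constrained prompt for one question."""
    return PROMPT_TEMPLATE.format(
        connectors=", ".join(ALLOWED_CONNECTORS), question=question
    )


def trace_is_compact(trace: str, max_tokens: int = 300) -> bool:
    """Accept a synthetic trace only if every step opens with an allowed
    connector and the whole trace stays under a rough token budget."""
    steps = [line.strip() for line in trace.splitlines() if line.strip()]
    uses_connectors = all(
        any(step.startswith(c) for c in ALLOWED_CONNECTORS) for step in steps
    )
    within_budget = len(trace.split()) <= max_tokens  # crude whitespace count
    return uses_connectors and within_budget


if __name__ == "__main__":
    print(build_prompt("If 3 pens cost $6, how much do 7 pens cost?"))
    demo = "First, one pen costs $2.\nTherefore, 7 pens cost $14."
    print(trace_is_compact(demo))  # True
```

In such a pipeline, traces generated by a teacher model (e.g., Gemini-2.0-Flash) that fail the connector or length check would simply be discarded before training, which is one way the compact, well-structured trace distribution could be enforced.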

Downloads

  • Slides
  • Paper
  • Transcript English (automatic)

