Long chain-of-thought (CoT) prompting helps Large Language Models (LLMs) solve difficult problems, but very long traces often slow or even degrade performance on fast, intuitive "System-1" tasks. We introduce Connector-Aware Compact CoT (CAC-CoT), a method that deliberately restricts reasoning to a small, fixed set of connector phrases, steering the model toward concise and well-structured explanations. Despite its simplicity, our synthetic data generation method with Gemini-2.0-Flash yields high-quality training data. CAC-CoT achieves approximately 85% on GSM8K and approximately 40% on GPQA (System-2) while retaining approximately 90% on S1-Bench (System-1). Its reasoning traces average approximately 300 tokens (ART), about one-third the length of baseline traces, delivering higher efficiency without loss of accuracy.
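To make the core idea concrete, below is a minimal, illustrative Python sketch of how synthetic traces could be screened so that they stay compact and open sentences only with a fixed set of connector phrases. The connector list, the 300-token budget, and the function name `trace_is_compact` are assumptions for illustration; the abstract does not specify the actual connector set or filtering procedure used by CAC-CoT.

```python
import re

# Hypothetical connector whitelist; the actual phrases used by CAC-CoT are not
# given in the abstract, so these are illustrative placeholders.
ALLOWED_CONNECTORS = {"First,", "Next,", "Then,", "So,", "Therefore,", "Finally,"}

# Rough token budget, matching the ~300-token average trace length (ART) reported above.
MAX_TRACE_TOKENS = 300


def trace_is_compact(trace: str) -> bool:
    """Reject traces that exceed the token budget or open a sentence with a
    connector outside the allowed set (a rough proxy for CAC-CoT-style structure)."""
    if len(trace.split()) > MAX_TRACE_TOKENS:
        return False
    for sentence in re.split(r"(?<=[.!?])\s+", trace):
        first_word = sentence.split(" ", 1)[0]
        # Treat a leading word ending in a comma as a discourse connector.
        if first_word.endswith(",") and first_word not in ALLOWED_CONNECTORS:
            return False
    return True


# Example: filter a batch of synthetic traces before using them for training.
traces = [
    "First, compute 3 * 4 = 12. Then, add 5 to get 17. Therefore, the answer is 17.",
    "However, let me reconsider the problem from scratch. " + "word " * 400,
]
compact = [t for t in traces if trace_is_compact(t)]
print(len(compact))  # -> 1
```

In this sketch, the first trace passes because every sentence opens with an allowed connector and the trace is short; the second is rejected for exceeding the token budget (and for opening with a connector outside the whitelist).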