Large Language Models (LLMs) have introduced paradigm-shifting approaches in natural language processing, yet their in-context learning (ICL) capabilities remain underutilized in customer service dialogue summarization, a domain plagued by generative hallucinations, omitted details, and inconsistencies. We present Chain-of-Interactions (CoI), a novel single-instance, multi-step framework that orchestrates information extraction, self-correction, and evaluation through sequential interactive generation chains. By leveraging LLMs' ICL capabilities through carefully engineered prompts, CoI substantially improves the quality and usefulness of abstractive task-oriented dialogue summarization (ATODS). Our evaluation methodology combines novel LLM-based and standard automatic metrics with rigorous human assessment on real-world and benchmark customer service dialogues. Results show that CoI significantly outperforms the state-of-the-art Chain-of-Density (CoD) approach across nine distinct summarization quality dimensions. This research addresses critical gaps in task-oriented dialogue summarization for customer service applications and establishes new standards for harnessing LLMs' reasoning capabilities in practical, industry-relevant contexts\footnote{Dataset, code, and materials are available: \url{https://anonymous.4open.science/r/CoI-BFC0}}.
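The sequential extract-then-correct structure the abstract describes can be sketched as a chain of prompt stages, where each stage's output feeds the next. This is a minimal illustrative sketch, not the paper's implementation: the stage prompts, function names, and the pluggable `llm` callable are all assumptions for demonstration.

```python
# Hedged sketch of a Chain-of-Interactions-style pipeline.
# The prompt templates below are illustrative placeholders, not the
# paper's engineered prompts; `llm` is any callable str -> str
# (e.g., a wrapper around an LLM API).

def chain_of_interactions(dialogue: str, llm) -> str:
    """Run sequential prompt stages; each stage sees the prior output."""
    stages = [
        # Stage 1: information extraction from the raw dialogue.
        "Extract the key facts from this customer-service dialogue:\n{x}",
        # Stage 2: draft summary grounded only in the extracted facts.
        "Draft a concise summary using only these extracted facts:\n{x}",
        # Stage 3: self-correction / evaluation pass over the draft.
        "Check the draft against the facts, correct any hallucinated "
        "or omitted details, and return the final summary:\n{x}",
    ]
    output = dialogue
    for template in stages:
        output = llm(template.format(x=output))
    return output
```

A usage sketch with a stub in place of a real model, to show the stages compose sequentially:

```python
def stub_llm(prompt: str) -> str:
    return prompt + " [stage done]"

final = chain_of_interactions("Customer: my order never arrived...", stub_llm)
```

The key design point implied by the abstract is that all stages operate on a single instance within one chained interaction, so later stages can correct earlier outputs rather than summarizing in one shot.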