VIDEO DOI: https://doi.org/10.48448/cxxd-ys33

poster

ACL 2024

August 12, 2024

Bangkok, Thailand

Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?

keywords:

multi-agent discussion

llm

prompt engineering

reasoning

Recent progress in LLM discussion suggests that multi-agent discussion improves the reasoning abilities of LLMs. In this work, we reevaluate this claim through systematic experiments, where we propose a novel group discussion framework to enrich the set of discussion mechanisms. Interestingly, our results show that a single-agent LLM with strong prompts can achieve nearly the same performance as the best existing discussion approach across a wide range of reasoning tasks and backbone LLMs. We observe that multi-agent discussion outperforms a single agent only when no demonstration is provided in the prompt. Further study reveals the common interaction mechanisms of LLMs during discussion. Our code can be found at https://github.com/HKUST-KnowComp/LLM-discussion.
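For readers unfamiliar with the two settings the abstract contrasts, the following is a minimal, hypothetical sketch of a single agent with a strong (demonstration-augmented) prompt versus a multi-agent discussion loop. The `query_llm` stub, agent counts, and prompt wording are illustrative assumptions only, not the authors' implementation; the actual code is in the repository linked above.

```python
# Illustrative sketch only; not the authors' implementation.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call; replace with a real client."""
    return f"[model answer to: {prompt[:40]}...]"

def single_agent(question: str, demonstrations: str) -> str:
    # Single agent with a "strong" prompt: demonstrations plus step-by-step instruction.
    prompt = f"{demonstrations}\nQuestion: {question}\nLet's think step by step."
    return query_llm(prompt)

def multi_agent_discussion(question: str, n_agents: int = 3, rounds: int = 2) -> list[str]:
    # Each agent first answers independently, then revises after reading peers' answers.
    answers = [
        query_llm(f"Question: {question}\nGive your answer and reasoning.")
        for _ in range(n_agents)
    ]
    for _ in range(rounds):
        answers = [
            query_llm(
                f"Question: {question}\nOther agents answered:\n"
                + "\n".join(a for j, a in enumerate(answers) if j != i)
                + "\nReconsider and give your final answer."
            )
            for i in range(len(answers))
        ]
    return answers  # final answers could then be aggregated, e.g. by majority vote

if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    print(single_agent(q, demonstrations="(few-shot demonstrations would go here)"))
    print(multi_agent_discussion(q))
```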
