Improving LLM Generations via Fine-Grained Self-Endorsement

Poster, ACL 2024

August 12, 2024, Bangkok, Thailand

VIDEO DOI: https://doi.org/10.48448/pf31-rw94

Keywords: inference-time improvement, self-endorsement, hallucination

This work studies mitigating fact-conflicting hallucinations in large language models (LLMs) at inference time. In particular, we propose a self-endorsement framework that leverages fine-grained, fact-level comparisons across multiple sampled responses. Compared with prior ensemble methods (e.g., self-consistency) that perform response-level selection, our approach better alleviates hallucinations on knowledge-intensive tasks. Because it mainly relies on simple content-based comparisons, it can also broadly benefit smaller and open-source LLMs. Experiments on Biographies show that our method effectively improves the factuality of generations using simple and intuitive prompts across LLMs of different scales. In addition, comprehensive analyses on TriviaQA and GSM8K demonstrate the potential of self-endorsement for broader applications.
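To make the abstract's procedure concrete, here is a minimal, self-contained Python sketch of fact-level self-endorsement as described above: sample several responses to the same prompt, decompose one into fact-like units, keep only the units that most other samples also contain, and assemble the survivors into the final answer. Everything below is an illustrative assumption, not the authors' implementation; the naive sentence splitting, the fuzzy-overlap test in `is_endorsed_by`, and the `generate` callable are stand-ins for the paper's actual decomposition, comparison, and prompting steps.

```python
# Sketch of fact-level self-endorsement, based only on the abstract above.
# The comparison here is a simple content-based fuzzy match, standing in
# for whatever fact-level comparison the paper actually uses.

from difflib import SequenceMatcher
from typing import Callable, List


def split_into_facts(text: str) -> List[str]:
    """Naive fact decomposition: treat each sentence as one atomic fact."""
    return [s.strip() for s in text.split(".") if s.strip()]


def is_endorsed_by(fact: str, response: str, min_ratio: float = 0.7) -> bool:
    """Content-based check: does any sentence in `response` closely match `fact`?"""
    return any(
        SequenceMatcher(None, fact.lower(), other.lower()).ratio() >= min_ratio
        for other in split_into_facts(response)
    )


def self_endorse(
    prompt: str,
    generate: Callable[[str], str],  # wraps one sampled LLM call
    num_samples: int = 5,            # must be >= 2 so there are peers to vote
    vote_threshold: float = 0.5,
) -> str:
    """Keep only the facts that a majority of sampled responses endorse."""
    responses = [generate(prompt) for _ in range(num_samples)]
    base, others = responses[0], responses[1:]

    # Vote on each fact of the base response using the remaining samples.
    endorsed = [
        fact
        for fact in split_into_facts(base)
        if sum(is_endorsed_by(fact, r) for r in others) / len(others) >= vote_threshold
    ]
    return ". ".join(endorsed) + ("." if endorsed else "")
```

Unlike self-consistency, which votes over whole responses and keeps or discards each one in its entirety, this fact-level vote lets correct statements survive even when they appear in otherwise flawed samples, which is the contrast the abstract draws.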

