VIDEO DOI: https://doi.org/10.48448/61ce-6054

Poster

ACL 2024

August 12, 2024

Bangkok, Thailand

Enhancing Hallucination Detection through Perturbation-Based Synthetic Data Generation in System Responses

Keywords: hallucination detection, finetuning, data augmentation

Detecting hallucinations in large language model (LLM) outputs is pivotal, yet traditional fine-tuning for this classification task is impeded by an annotation process that is expensive and quickly becomes outdated, especially across numerous vertical domains and in the face of rapid LLM advancements. In this study, we introduce an approach that automatically generates both faithful and hallucinated outputs by rewriting system responses. Experimental findings demonstrate that a T5-base model, fine-tuned on our generated dataset, surpasses state-of-the-art zero-shot detectors and existing synthetic generation methods in both accuracy and latency, indicating the efficacy of our approach.
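
As a rough illustration (not the authors' released code), the sketch below mimics the pipeline the abstract outlines: perturb faithful system responses into hallucinated counterparts, then fine-tune T5-base as a detector in T5's text-to-text format. The number-swapping perturbation and the "detect hallucination:" task prefix are hypothetical stand-ins, since the abstract does not specify the actual rewriting strategy.

```python
# A minimal sketch of the pipeline the abstract outlines, NOT the
# authors' released code: synthesize hallucinated negatives by
# perturbing faithful responses, then fine-tune T5-base as a detector.
import random
import re

from transformers import T5ForConditionalGeneration, T5Tokenizer


def perturb_response(response: str) -> str:
    """Rewrite a faithful response into a hallucinated variant.

    This toy heuristic corrupts one numeric fact; it stands in for
    the paper's actual rewriting strategy, which the abstract does
    not detail.
    """
    numbers = re.findall(r"\d+", response)
    if not numbers:
        return response  # nothing this heuristic can perturb
    target = random.choice(numbers)
    replacement = str(int(target) + random.randint(1, 9))
    return response.replace(target, replacement, 1)


# Pair faithful originals with perturbed negatives to get labeled data.
faithful = ["The Eiffel Tower is 330 meters tall and opened in 1889."]
examples = [(r, "faithful") for r in faithful]
examples += [(perturb_response(r), "hallucinated") for r in faithful]

# Cast detection in T5's text-to-text format: the model generates the
# label word for each input response.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

for response, label in examples:
    inputs = tokenizer(f"detect hallucination: {response}",
                       return_tensors="pt")
    targets = tokenizer(label, return_tensors="pt").input_ids
    loss = model(**inputs, labels=targets).loss
    loss.backward()  # in real training, step an optimizer such as AdamW
```

At inference, the fine-tuned model generates "faithful" or "hallucinated" for a new response, which is what allows the detector to run with lower latency than zero-shot LLM-based judges.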

Downloads

  • Slides
  • Transcript (English, automatic)
