VIDEO DOI: https://doi.org/10.48448/3n56-1n41

Poster

ACL 2024

August 12, 2024

Bangkok, Thailand

Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models

Keywords: multimodal hallucination snowballing; large vision-language models; hallucination

Though advanced in understanding visual information with human languages, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs' subsequent generation. Thus, we raise a question: when presented with a query relevant to a previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground-truth visual information exists? To answer this, we propose a framework called MMHalSnowball to evaluate LVLMs' behavior when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiments show that the performance of open-source LVLMs drops by at least 31%, indicating that LVLMs are prone to accepting the generated hallucinations and making false claims that they would not have supported without the distraction. We term this phenomenon Multimodal Hallucination Snowballing. To mitigate this issue, we further propose a training-free method called Residual Visual Decoding, in which we revise the output distribution of the LVLM with one derived from the residual visual input, giving the model direct access to the visual information. Experiments show that our method can mitigate more than 24% of the snowballed multimodal hallucination while maintaining general capabilities.
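To make the decoding idea concrete: the next-token distribution conditioned on the full (possibly hallucinatory) conversation is revised with one conditioned only on the residual visual input, i.e. the image and the current question with the earlier turns stripped away. The sketch below is a minimal, assumption-based rendering of that revision step, not the paper's exact formulation; the simple log-probability blend and the fixed weight alpha are illustrative choices.

import torch
import torch.nn.functional as F

def residual_visual_decoding(logits_full: torch.Tensor,
                             logits_residual: torch.Tensor,
                             alpha: float = 0.5) -> torch.Tensor:
    """Return revised next-token log-probabilities.

    logits_full:     logits given image + full conversation history.
    logits_residual: logits given the residual visual input only
                     (image + current query, hallucinatory turns removed).
    alpha:           blend weight (hypothetical; the paper may set this
                     adaptively rather than as a fixed constant).
    """
    log_p_full = F.log_softmax(logits_full, dim=-1)
    log_p_res = F.log_softmax(logits_residual, dim=-1)
    # Shift probability mass toward what the visual evidence alone supports.
    revised = (1.0 - alpha) * log_p_full + alpha * log_p_res
    # Renormalize so the result is again a valid log-distribution.
    return revised - torch.logsumexp(revised, dim=-1, keepdim=True)

# Toy usage with random logits over a 10-token vocabulary.
if __name__ == "__main__":
    torch.manual_seed(0)
    full, res = torch.randn(1, 10), torch.randn(1, 10)
    print(residual_visual_decoding(full, res).argmax(dim=-1))

In practice the two logit tensors would come from two forward passes of the same LVLM over the two input variants at each decoding step; only the conditioning context differs.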
