Large Language Models (LLMs) perform excellently in fake news detection tasks, but their outputs are often accompanied by hallucination phenomena, i.e., generated content that contradicts or deviates from facts. Previous studies have mostly mitigated hallucinations through prompt design. However, this paper reveals that the regions of news articles that readily induce hallucinations in LLMs correspond closely to the regions that challenge fake news detectors. Based on this finding, we propose a fake news detection framework (PHPFND) based on post-hoc processing of LLM hallucinations. Specifically, our framework includes a hallucination detection module (ISHD) based on information structuring that detects three types of LLM hallucinations in a targeted manner, and a hallucination-driven feature enhancement mechanism (HDFE) that incorporates hallucination signals as explicit features into sentence-level encoding and feature fusion to guide the model's attention toward high-risk regions. Experimental results on two mainstream fake news datasets show that our proposed method significantly outperforms mainstream LLM-based baselines.
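The abstract does not detail the HDFE mechanism, but the idea of injecting hallucination signals as explicit features into sentence-level encoding can be sketched roughly as follows. All names, the three-type score layout, and the fusion-by-concatenation choice are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_hallucination_features(sent_emb: np.ndarray, hall_scores: np.ndarray):
    """Illustrative sketch of hallucination-driven feature enhancement.

    sent_emb:    (num_sentences, dim) sentence embeddings from any encoder.
    hall_scores: (num_sentences, 3) per-sentence scores for three assumed
                 hallucination types (as detected by a module like ISHD).

    Returns fused features of shape (num_sentences, dim + 3) and attention
    weights biased toward high-risk (high-hallucination) sentences.
    """
    # Attach the hallucination signals as explicit extra feature dimensions.
    fused = np.concatenate([sent_emb, hall_scores], axis=1)

    # Summarize per-sentence risk as the mean over the three score types.
    risk = hall_scores.mean(axis=1)

    # Softmax over risk so high-risk regions receive more attention weight.
    w = np.exp(risk - risk.max())
    attn = w / w.sum()
    return fused, attn
```

A downstream classifier could then pool `fused` with the `attn` weights, so sentences flagged as hallucination-prone contribute more to the final real/fake decision.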
