Infodemics and health misinformation have significant negative effects on individuals and society, exacerbating confusion and increasing hesitancy in adopting recommended health measures. Recent advances in generative AI, capable of producing realistic, human-like text and images, have significantly accelerated the spread and expanded the reach of health misinformation, resulting in an alarming surge in its dissemination. To combat infodemics, most existing work focuses on developing misinformation datasets from social media and fact-checking platforms, but these datasets face limitations in topical coverage, inclusion of AI-generated content, and accessibility of raw content. To address these issues, we present MM-Health, a large-scale multimodal misinformation dataset in the health domain consisting of 34,746 news articles encompassing both textual and visual information. MM-Health includes human-generated multimodal information (5,776 articles) and AI-generated multimodal information (28,880 articles) produced by various SOTA generative AI models. Additionally, we benchmark our dataset on three tasks—reliability checks, originality checks, and fine-grained AI detection—demonstrating that existing SOTA models struggle to accurately distinguish the reliability and origin of information. Our dataset aims to support the development of misinformation detection across various health scenarios, facilitating the detection of human- and machine-generated content at the multimodal level. Our code and data are available at: \url{https://anonymous.4open.science/r/MM-Health-Supplementary-Material-E14C}.