The spread of AI-generated images (AIGI), driven by advances in generative AI, poses a significant threat to information security and public trust. Existing AIGI detectors, while effective against images in clean laboratory settings, fail to generalize to in-the-wild scenarios. These real-world images are noisy, ranging from “obviously fake” images to realistic ones produced by multiple generative models and further edited for quality control. We address in-the-wild AIGI detection in this paper. We introduce MIRAGE, a challenging benchmark designed to emulate the complexity of in-the-wild AIGI. MIRAGE is constructed from two sources: (1) a large corpus of Internet-sourced AIGI verified by human experts, and (2) a synthesized dataset created through collaboration among multiple expert generators, closely simulating realistic AIGI in the wild. Building on this benchmark, we propose MIRAGE-R1, a vision-language model with heuristic-to-analytic reasoning, a reflective reasoning mechanism for AIGI detection. MIRAGE-R1 is trained in two stages: a supervised fine-tuning cold start, followed by a reinforcement learning stage. By further adopting an inference-time adaptive thinking strategy, MIRAGE-R1 can provide either a quick judgment or a more robust and accurate conclusion, effectively balancing inference speed and performance. Extensive experiments show that our model leads state-of-the-art detectors by 5% and 10% on MIRAGE and public benchmarks, respectively. The benchmark and code will be made publicly available.
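The inference-time adaptive thinking strategy described above can be pictured as a confidence-gated router between a fast heuristic pass and a slower analytic (reflective) pass. The sketch below is a minimal illustration of that idea only; the function names, the `Judgment` structure, and the confidence threshold are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a confidence-gated adaptive thinking strategy.
# We assume the detector exposes a fast "heuristic" pass and a slower
# "analytic" pass, each returning a label with a confidence score.
# All names and the threshold value are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Judgment:
    label: str         # e.g. "real" or "fake"
    confidence: float  # in [0, 1]

def adaptive_detect(
    image: object,
    heuristic_pass: Callable[[object], Judgment],
    analytic_pass: Callable[[object], Judgment],
    threshold: float = 0.9,
) -> Judgment:
    """Return the quick judgment when the fast pass is confident;
    otherwise fall back to the slower reflective reasoning pass."""
    quick = heuristic_pass(image)
    if quick.confidence >= threshold:
        return quick                 # fast path: quick judgment
    return analytic_pass(image)      # slow path: robust conclusion
```

Gating on the fast pass's own confidence is what lets such a scheme trade inference speed against accuracy: easy images exit early, while ambiguous ones pay for the longer reasoning chain.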