Vision-language models (VLMs) are highly effective at semantic reasoning but struggle with a basic perceptual skill: recognizing hidden content in optical illusions and camouflaged images, which humans can perceive through simple adjustments like squinting or zooming. We introduce HC-Bench, a benchmark of over 1,200 images containing hidden text, objects, and illusions. Our evaluation across 11 state-of-the-art VLMs shows near-zero accuracy even when explicit prompts are provided, in stark contrast to human performance. Surprisingly, we find that downscaling the input image to a low resolution (32–128 pixels) restores model accuracy to over 99%. Additional experiments, including fine-tuning and image blurring, support the hypothesis that high-resolution inputs introduce redundant local features that interfere with global pattern recognition. These findings reveal a critical architectural blind spot in current VLMs and point toward the need for hybrid models with multi-scale visual processing. Our results have implications for applications in medical imaging, security, and other real-world settings that require robust visual understanding.
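The downscaling intervention described above can be sketched as a simple average-pooling step applied before an image reaches the model. The snippet below is a minimal pure-Python illustration, not the paper's actual pipeline: it assumes a grayscale image stored as a nested list, and a production system would instead use a library resize (e.g. PIL's `Image.resize`) on RGB inputs.

```python
def downscale(image, factor):
    """Average-pool a 2D grayscale image by `factor`.

    A stand-in for the low-resolution preprocessing discussed in the
    abstract: each factor x factor block is replaced by its mean, so a
    1024-pixel-wide input with factor=16 becomes 64 pixels wide,
    within the 32-128 pixel range the abstract reports as effective.
    Rows/columns that do not fill a complete block are dropped.
    """
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor)
                     for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Toy example: a 4x4 checkerboard pattern pooled down to 2x2.
hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [255, 255, 0, 0],
          [255, 255, 0, 0]]
lo_res = downscale(hi_res, factor=2)
print(lo_res)  # [[0.0, 255.0], [255.0, 0.0]]
```

Averaging over blocks discards the fine-grained local texture while preserving coarse structure, which is consistent with the abstract's hypothesis that redundant high-resolution local features are what interfere with global pattern recognition.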