Despite the rapid progress of multimodal large language models (MLLMs), their capacity for low-level visual perception in underwater environments remains underexplored. To address this gap, we present UQ-Bench, the first systematically designed benchmark for evaluating the ability of MLLMs to perceive and assess underwater image quality in terms of low-level visual attributes. UQ-Bench comprises three components: (1) UW-Perception, a dataset of 3,000 underwater images paired with targeted questions on key degradations such as color cast, blur, contrast, and exposure, covering both global and local perceptual dimensions; (2) UW-Describe, a dataset of 500 images with expert-annotated gold-standard descriptions for assessing the accuracy of model-generated text; and (3) UW-Eval, an evaluation protocol employing human mean opinion scores (MOS) for quantitative quality assessment. To ensure rigorous and reproducible benchmarking, we propose a GPT-assisted evaluation framework that aligns model outputs with expert references and enables fine-grained analysis of distortion perception. Experimental results demonstrate that while MLLMs exhibit preliminary competence in underwater low-level visual tasks, they still fall short of capturing subtle degradations and achieving human-level consistency, highlighting the need for further advances in foundation models for marine vision. Both the benchmark and the code will be made publicly available.
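The UW-Eval protocol scores models against human mean opinion scores (MOS). The sketch below illustrates that general style of evaluation, assuming model outputs have already been mapped to numeric quality ratings; the file names, JSON schema, and helper functions are hypothetical placeholders, not the released benchmark code.

```python
# Minimal sketch (not the authors' released code): comparing a model's predicted
# quality ratings against human mean opinion scores (MOS), as is standard in
# image-quality-assessment evaluation. File names and the JSON schema are
# hypothetical.
import json
from scipy.stats import pearsonr, spearmanr

def load_scores(path):
    """Load a list of {"image_id": ..., "score": float} records into a dict."""
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)
    return {r["image_id"]: float(r["score"]) for r in records}

def mos_agreement(pred_path, mos_path):
    """Return (SRCC, PLCC) between model-predicted scores and human MOS."""
    preds = load_scores(pred_path)   # model outputs mapped to numeric scores
    mos = load_scores(mos_path)      # human MOS annotations
    common = sorted(preds.keys() & mos.keys())
    x = [preds[k] for k in common]
    y = [mos[k] for k in common]
    srcc, _ = spearmanr(x, y)        # rank-order consistency with humans
    plcc, _ = pearsonr(x, y)         # linear agreement with humans
    return srcc, plcc

if __name__ == "__main__":
    srcc, plcc = mos_agreement("model_scores.json", "uw_eval_mos.json")
    print(f"SRCC={srcc:.3f}  PLCC={plcc:.3f}")
```

Rank correlation (SRCC) and linear correlation (PLCC) are the usual summary statistics for MOS-based quality assessment; the benchmark's GPT-assisted framework for aligning free-form model descriptions with expert references would sit upstream of a step like this.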