Vision-Language Models (VLMs) have made significant progress in quality assessment tasks. However, a fundamental paradox arises when applying them to Point Cloud Quality Assessment (PCQA). Existing VLMs, designed for image-text pairs, are inherently incompatible with 3D point cloud data due to the modality gap. While some PCQA research attempts to adapt point clouds to VLMs by projecting them directly onto 2D planes, this approach inevitably sacrifices crucial spatial structure information essential for accurate quality assessment. Conversely, directly integrating a dedicated 3D branch into a VLM-based PCQA framework introduces feature space misalignment and an influx of quality-insensitive information. To bridge these fundamental conflicts hindering the adaptation of VLMs to the PCQA domain, we propose the PMP-PCQA framework, which leverages the inherent mapping relationship between points and pixels to seamlessly apply VLMs in PCQA. Our approach introduces three key innovations: a Spatial Awareness Enhancer (SAE) module that enriches image features with spatial coordinate clues to reinforce geometric awareness in 2D visual representations; a Fine-to-Coarse Consistency Alignment (FCA) module that bridges the gap between 2D and 3D modalities by leveraging point-pixel correspondences to construct bridging features; and a Text-Guided Adaptive Miner (TAM) module that dynamically suppresses quality-insensitive features to mine discriminative visual clues for PCQA. Extensive evaluations demonstrate that PMP-PCQA consistently outperforms state-of-the-art methods across multiple benchmarks.
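The point-pixel correspondence the framework relies on can be illustrated with a minimal sketch: projecting each 3D point through pinhole camera intrinsics to a pixel location, then gathering the image feature at that pixel as the point's 2D counterpart. The helper names, the pinhole camera model, and nearest-pixel gathering are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def project_points_to_pixels(points, K, H, W):
    """Project 3D points (N, 3) in camera coordinates onto the image plane
    with pinhole intrinsics K (3, 3); return rounded integer pixel
    coordinates and a mask of points that land inside the H x W image.
    (Illustrative sketch, not the paper's projection pipeline.)"""
    valid = points[:, 2] > 1e-6                 # keep points in front of the camera
    uvw = (K @ points.T).T                      # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3].clip(1e-6)    # perspective divide
    px = np.round(uv).astype(int)               # nearest-pixel correspondence
    inside = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
    return px, valid & inside

def gather_pixel_features(feat_map, px, mask):
    """Gather a per-point feature vector from an (H, W, C) image feature map
    using the point-to-pixel correspondence; out-of-view points get zeros."""
    out = np.zeros((px.shape[0], feat_map.shape[2]), dtype=feat_map.dtype)
    sel = px[mask]
    out[mask] = feat_map[sel[:, 1], sel[:, 0]]  # index as (row = v, col = u)
    return out
```

Features gathered this way are aligned one-to-one with the input points, which is what allows 2D VLM features and 3D geometry to be fused without a separate 3D encoder.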