Existing LiDAR point cloud (LPC) coding methods primarily focus on balancing compression efficiency and reconstruction quality according to the human vision system (HVS). However, these methods rarely consider the requirements of downstream scene understanding tasks from the perspective of the machine vision system (MVS). To address this challenge, we explore the maximum degree of LPC compression that has negligible impact on perception accuracy, called LPC-based just recognizable compression distortion (lpcJRCD). Specifically, we introduce a novel point-wise quantization approach for constructing an MVS-based LiDAR dataset and present a new lpcJRCD-guided intelligent compression framework tailored for MVS applications. To enhance MVS-based LPC compression efficiency, we develop a dual-feature interaction (DFI) module that fuses point and voxel features. Additionally, we propose a mask-based loss function to ensure accurate point-wise quality level prediction. Experimental results demonstrate that our proposed model reduces the average bit rate by up to 94.98% while preserving perception accuracy in autonomous driving.
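The abstract does not specify how the DFI module fuses point and voxel features, but a common point-voxel interaction pattern pools per-point features into voxels and broadcasts the pooled voxel context back to each point. The sketch below illustrates that pattern with NumPy; the function name, mean-pooling choice, and concatenation-based fusion are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def voxelize(points, voxel_size=1.0):
    """Map each 3-D point to an integer voxel coordinate."""
    return np.floor(points / voxel_size).astype(np.int64)

def dual_feature_fusion(points, point_feats, voxel_size=1.0):
    """Illustrative point-voxel interaction (NOT the paper's DFI module):
    mean-pool point features per voxel, gather the voxel feature back to
    every point it contains, and fuse the two views by concatenation."""
    coords = voxelize(points, voxel_size)
    # Unique voxels and an inverse map from each point to its voxel.
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    # Mean-pool point features into their voxels.
    voxel_feats = np.zeros((len(uniq), point_feats.shape[1]))
    np.add.at(voxel_feats, inv, point_feats)
    voxel_feats /= np.bincount(inv, minlength=len(uniq))[:, None]
    # Broadcast voxel context back to points and concatenate.
    return np.concatenate([point_feats, voxel_feats[inv]], axis=1)

points = rng.uniform(0.0, 4.0, size=(128, 3))
feats = rng.normal(size=(128, 16))
fused = dual_feature_fusion(points, feats, voxel_size=2.0)
print(fused.shape)  # (128, 32): per-point features + voxel context
```

In a learned model, the concatenation would typically feed a small MLP so the network can weight the two feature streams; the pooling here is a fixed stand-in for that learned interaction.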
