3D object detection is a critical component of autonomous driving, yet its performance degrades severely in adverse weather because LiDAR point clouds deteriorate. While existing LiDAR-4D radar fusion methods improve robustness by incorporating weather-robust 4D radar data, they often depend on well-defined geometric structures from LiDAR and therefore struggle to exploit radar data effectively when the LiDAR input is degraded. To tackle this challenge, we propose REL, a novel 4D radar-guided LiDAR geometric enhancement framework. REL uses 4D radar features to dynamically generate virtual LiDAR points, effectively increasing the density of degraded LiDAR data. Moreover, a Position-Guided Cross Attention (PGCA) module is proposed to enhance the feature representation of the virtual points, and an Adaptive Feature Fusion (AFF) module is designed to integrate virtual and real LiDAR features. Extensive experiments on the K-Radar and Vod-Fog datasets demonstrate that REL achieves state-of-the-art 3D object detection performance under diverse adverse weather conditions. Notably, REL improves overall AP3D by 9.3% on K-Radar and boosts 3D mAP for the cyclist class by up to 52.9% under the most severe fog condition on Vod-Fog.
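To make the two described modules concrete, the following is a minimal PyTorch sketch of what a position-guided cross-attention step and an adaptive feature fusion gate could look like. All class names, shapes, and design details here are assumptions for illustration; they are not the authors' actual implementation of PGCA or AFF.

```python
import torch
import torch.nn as nn

class PositionGuidedCrossAttention(nn.Module):
    """Sketch of a PGCA-style module (hypothetical): features of virtual
    points act as queries and attend to real LiDAR features, with 3D
    positions embedded and added to both streams to guide the attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.pos_embed = nn.Linear(3, dim)                    # lift (x, y, z) to feature dim
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, virt_feat, virt_xyz, real_feat, real_xyz):
        q = virt_feat + self.pos_embed(virt_xyz)              # position-guided queries
        k = real_feat + self.pos_embed(real_xyz)              # position-guided keys
        out, _ = self.attn(q, k, real_feat)                   # values are raw real features
        return out                                            # enhanced virtual features

class AdaptiveFeatureFusion(nn.Module):
    """Sketch of an AFF-style gate (hypothetical): a learned sigmoid gate
    blends virtual and real LiDAR features per point and channel."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, virt, real):
        g = self.gate(torch.cat([virt, real], dim=-1))        # per-channel mixing weight
        return g * virt + (1 - g) * real                      # fused features
```

Under this sketch, radar-generated virtual points are first refined against the real LiDAR context via attention, then gated into the real feature stream, so degraded LiDAR regions can lean on radar-derived geometry without overwriting reliable LiDAR measurements.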
