3D object detection in adverse weather remains a critical challenge for autonomous driving systems, particularly in smoke-obscured environments where sparse and noisy LiDAR measurements degrade perception performance. To address the scarcity of real-world smoke data, this paper proposes a physically-grounded simulation framework to synthesize realistic LiDAR point clouds of smoke and augment large-scale driving datasets for improved perception robustness. First, we present a 3D fluid dynamics-based smoke simulation framework in Unity3D, which models the realistic spatial diffusion and temporal evolution of smoke particles. Coupled with a physically accurate LiDAR perception module, our system captures complex light interactions—such as beam attenuation, scattering, and multi-path effects—to generate high-fidelity, physically consistent smoke point clouds. Second, we propose a range image-based data fusion strategy that seamlessly integrates the simulated smoke point clouds into large-scale real-world LiDAR datasets (e.g., Waymo). This approach accurately emulates LiDAR scanning characteristics and naturally incorporates occlusion effects, enabling realistic smoke integration without compromising spatial consistency. To validate our approach, we collect a real-world LiDAR smoke dataset (LiSmoke) and conduct extensive experiments using state-of-the-art 3D detectors. Results demonstrate that models trained with our augmented synthetic data achieve significant improvements in smoke-affected scenarios, while maintaining competitive performance in clear-weather conditions. Our work provides a cost-effective solution for enhancing perception robustness in safety-critical environments.
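Two mechanisms the abstract relies on can be sketched briefly: nearest-return range-image projection (which makes simulated smoke points occlude the background naturally) and two-way beam attenuation through a smoke volume. The Python sketch below is illustrative only; the sensor geometry (a roughly 64-beam spinning LiDAR), the smoke-slab model, and the extinction coefficient `sigma` are assumptions for demonstration, not the paper's actual simulation parameters or Waymo's calibration.

```python
import numpy as np

def points_to_range_image(points, h=64, w=2048, fov_up=2.0, fov_down=-24.9):
    """Project an (N, 3) point cloud into an (h, w) range image, keeping
    the nearest return per pixel so nearer points (e.g. inserted smoke)
    occlude the background. Sensor parameters here are illustrative."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                            # [-pi, pi]
    inclination = np.arcsin(z / np.maximum(r, 1e-9))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((fov_up_r - inclination) / (fov_up_r - fov_down_r) * h)
    row = np.clip(row.astype(int), 0, h - 1)
    img = np.full((h, w), np.inf)
    order = np.argsort(-r)        # write far-to-near so the nearest wins
    img[row[order], col[order]] = r[order]
    return img

def attenuate_through_smoke(ranges, smoke_entry, smoke_exit, sigma=0.5):
    """Two-way Beer-Lambert transmittance for beams whose return range is
    `ranges`, crossing a smoke slab between smoke_entry and smoke_exit (m).
    sigma is an assumed extinction coefficient in 1/m."""
    path = np.clip(np.minimum(ranges, smoke_exit) - smoke_entry, 0.0, None)
    return np.exp(-2.0 * sigma * path)  # out-and-back through the smoke
```

In this simplified fusion scheme, smoke points are projected into the same pixel grid as the real scan and overwrite any pixel where they are the closer return, while returns behind the smoke keep their range but have their intensity scaled by the transmittance.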
