3D street scene reconstruction is a challenging yet crucial task for autonomous driving. Existing reconstruction methods often overlook two key limitations for high-quality driving scene reconstruction: sensitivity to camera parameter noise from high-speed vehicles and heavy reliance on precise dynamic object annotations in datasets. To resolve these issues, we propose DenoiseGS, a simple yet effective approach based on explicit 3D Gaussian splatting. Specifically, we introduce a novel learnable Delta attribute per Gaussian primitive that operates on the image plane during rasterization, mitigating the impact of noisy camera parameters by modulating the inputs of the $\alpha$-blending process. To enhance the representation of this Delta attribute, we propose a DeltaEstimator that encodes viewing direction and contextual cues to capture view dependence. We also extend the CUDA rasterizer with additional operations to enable efficient gradient updates for the Delta attribute. Furthermore, to overcome inaccurate annotations for dynamic objects, we propose a learnable B-spline trajectory optimization with a few control points to model the trajectory of each moving object. Comprehensive experiments on nuScenes and the Waymo Open Dataset demonstrate that DenoiseGS outperforms state-of-the-art methods across all metrics for both reconstruction quality and novel view synthesis.
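To make the Delta idea concrete: a minimal 1D sketch of front-to-back $\alpha$-blending in which each Gaussian's projected mean is shifted by a per-Gaussian offset before its opacity contribution is evaluated. This is an illustrative toy, not the paper's CUDA rasterizer; the function name `blend_pixel` and the 1D setup are assumptions for clarity.

```python
import numpy as np

def blend_pixel(px, means2d, deltas, sigmas, opacities, colors):
    """Toy 1D front-to-back alpha blending at one pixel coordinate px.

    Each Gaussian's projected mean is shifted by a (hypothetical analogue
    of the paper's) learnable per-Gaussian delta before computing alpha,
    so the deltas can compensate for a noisy camera projection.
    Gaussians are assumed already sorted front to back.
    """
    shifted = means2d + deltas  # delta modulates the alpha-blending input
    alphas = opacities * np.exp(-0.5 * ((px - shifted) / sigmas) ** 2)
    color, transmittance = 0.0, 1.0
    for a, c in zip(alphas, colors):
        color += transmittance * a * c   # accumulate weighted color
        transmittance *= (1.0 - a)       # attenuate remaining light
    return color
```

Because the deltas enter the blend differentiably, gradients from a photometric loss can flow to them just like to any other Gaussian attribute, which is what the extended CUDA operations in the abstract provide at scale.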

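The B-spline trajectory idea can likewise be sketched: a uniform cubic B-spline over a small set of learnable control points yields a smooth pose trajectory for a moving object, so only the control points (not a per-frame pose) need to be optimized. The function below is an assumed minimal implementation using the standard uniform cubic B-spline basis matrix, not the paper's code.

```python
import numpy as np

def cubic_bspline(control_points, t):
    """Evaluate a uniform cubic B-spline trajectory at normalized time t in [0, 1].

    control_points: (K, 3) array of learnable 3D control points, K >= 4.
    Returns the interpolated 3D position on the trajectory.
    """
    K = len(control_points)
    n_seg = K - 3                          # number of cubic segments
    s = min(int(t * n_seg), n_seg - 1)     # segment index
    u = t * n_seg - s                      # local parameter in [0, 1]
    # standard uniform cubic B-spline basis matrix
    M = (1.0 / 6.0) * np.array([
        [-1.0,  3.0, -3.0, 1.0],
        [ 3.0, -6.0,  3.0, 0.0],
        [-3.0,  0.0,  3.0, 0.0],
        [ 1.0,  4.0,  1.0, 0.0],
    ])
    U = np.array([u**3, u**2, u, 1.0])
    return U @ M @ control_points[s:s + 4]
```

With only a few control points per object, the trajectory is smooth by construction, which regularizes against jitter from inaccurate per-frame dynamic object annotations.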