3D Gaussian splatting (3DGS) has recently demonstrated significant potential in computer vision, enabling high-fidelity 3D scene reconstruction with real-time rendering and fast training times. However, existing methods struggle in large, visually sparse environments with high geometric self-similarity, due to their heavy reliance on image-based feature matching and depth information. In this work, we propose a novel reconstruction pipeline that reduces the dependence on visual features by incorporating IMU and LiDAR data to generate accurate point clouds and robustly localize images within the scene. Global colorization is achieved through 3D-to-2D projections of the localized images, which are then used to supervise 3DGS training. Our results demonstrate that the proposed pipeline significantly enhances the quality of 3D reconstruction in large, sparse scenarios, opening up new opportunities for applications in remote mapping and autonomous inspection.
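The global colorization step described above relies on projecting the LiDAR point cloud into each localized camera image and reading back pixel colors. The sketch below illustrates the general idea with a standard pinhole camera model; the function names, the nearest-pixel color lookup, and the specific intrinsics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_points(points_w, K, R, t):
    """Project Nx3 world-frame points (e.g. a LiDAR map) into a pinhole camera.

    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts_c = points_w @ R.T + t            # world frame -> camera frame
    in_front = pts_c[:, 2] > 1e-6         # keep only points with positive depth
    uv_h = pts_c @ K.T                    # apply intrinsics (homogeneous pixels)
    uv = uv_h[:, :2] / uv_h[:, 2:3]       # perspective divide
    return uv, in_front

def colorize(points_w, image, K, R, t):
    """Assign each 3D point the RGB color of the pixel it projects onto."""
    h, w = image.shape[:2]
    uv, in_front = project_points(points_w, K, R, t)
    u = np.round(uv[:, 0]).astype(int)    # nearest-pixel lookup (illustrative;
    v = np.round(uv[:, 1]).astype(int)    # real pipelines may interpolate)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points_w.shape[0], 3), dtype=image.dtype)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```

In a full pipeline, repeating this over all localized images and fusing the per-image colors (e.g. by averaging over visible views) yields the globally colorized point cloud used to supervise 3DGS training.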