The slow sampling speed of diffusion models hinders their application in 3D LiDAR scene completion. To address this, we propose Distillation-DPO, a novel framework that accelerates sampling through score distillation while simultaneously enhancing generation quality via preference alignment. Distillation-DPO follows a three-step procedure. First, the student model generates paired completion scenes from different initial noises. Second, using LiDAR scene evaluation metrics as the preference signal, we construct winning and losing sample pairs. Third, as our core innovation, Distillation-DPO optimizes the student model by exploiting the difference in score functions between the teacher and student models on the paired completion scenes. This operation performs variational score distillation of the student model while simultaneously encouraging the distilled student to prefer the winning samples over the losing ones. Extensive experiments demonstrate that Distillation-DPO achieves higher-quality scene completion than state-of-the-art diffusion models while accelerating sampling more than 5-fold. To our knowledge, our work is the first to integrate the preference learning principle of DPO into the distillation of diffusion models, offering a new paradigm of preference-aligned distillation.
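To make the three-step procedure concrete, below is a minimal PyTorch-style sketch of one training step. Every interface here (`student.generate`, `student_score`, `teacher`, `add_noise`, `metric_fn`) and the exact form of the loss are illustrative assumptions based on the abstract, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distillation_dpo_step(student, student_score, teacher, add_noise,
                          metric_fn, partial_scan, beta=0.1):
    """One Distillation-DPO-style training step (sketch, hypothetical API)."""
    # Step 1: the few-step student completes the same partial scan twice,
    # starting from two different initial noises.
    x_a = student.generate(torch.randn_like(partial_scan), partial_scan)
    x_b = student.generate(torch.randn_like(partial_scan), partial_scan)

    # Step 2: rank the pair with a LiDAR scene evaluation metric
    # (here: higher score = preferred) to form a winning/losing pair.
    with torch.no_grad():
        a_wins = metric_fn(x_a) >= metric_fn(x_b)
    x_w, x_l = (x_a, x_b) if a_wins else (x_b, x_a)

    # Step 3: perturb both samples at a shared random timestep and compare
    # teacher vs. student score (noise) predictions on each sample.
    t = torch.randint(0, 1000, (x_w.shape[0],), device=x_w.device)
    noise = torch.randn_like(x_w)
    x_w_t, x_l_t = add_noise(x_w, noise, t), add_noise(x_l, noise, t)

    with torch.no_grad():
        eps_t_w = teacher(x_w_t, t, partial_scan)
        eps_t_l = teacher(x_l_t, t, partial_scan)
    eps_s_w = student_score(x_w_t, t, partial_scan)
    eps_s_l = student_score(x_l_t, t, partial_scan)

    # Teacher-student score gap on each sample; plain variational score
    # distillation would simply minimize such gaps on all samples.
    gap_w = (eps_s_w - eps_t_w).pow(2).mean()
    gap_l = (eps_s_l - eps_t_l).pow(2).mean()

    # DPO-style preference term: closing the gap on the winner faster than
    # on the loser aligns the distilled student with the preferred samples.
    loss = -F.logsigmoid(-beta * (gap_w - gap_l))
    return loss
```

In this sketch, `beta` plays the role of the usual DPO temperature: larger values penalize more sharply any step in which the student tracks the teacher better on the losing sample than on the winning one.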
