Snapshot compressive imaging (SCI) captures multispectral images (MSIs) using a single coded two-dimensional (2-D) measurement, but reconstructing high-fidelity MSIs from these compressed inputs remains a fundamentally ill-posed problem. While diffusion-based reconstruction methods have recently raised the bar for quality, they face critical limitations: a lack of large-scale MSI training data, adverse domain shifts from RGB-pretrained models, and inference inefficiency due to multi-step sampling. These drawbacks restrict their practicality in real-world applications. In contrast to existing methods, which either follow costly iterative refinement or adapt subspace-based embeddings for diffusion models (e.g., DiffSCI, PSR-SCI), we introduce a fundamentally different paradigm: a self-supervised One-Step Diffusion (OSD) framework designed specifically for SCI. The key novelty lies in using a single-step diffusion refiner to correct an initial reconstruction, eliminating iterative denoising entirely while preserving generative quality. Moreover, we adopt a self-supervised equivariant learning strategy to train both the predictor and the refiner directly from raw 2-D measurements, enabling generalization to unseen domains without ground-truth MSIs. To further address limited MSI data, we design a band-selection-driven distillation strategy that transfers core generative priors from large-scale RGB datasets, effectively bridging the domain gap. Extensive experiments confirm that our approach sets a new standard, yielding PSNR gains of 3.44 dB, 1.61 dB, and 0.33 dB on the Harvard, NTIRE, and ICVL datasets, respectively, while cutting reconstruction time by 97.5%, from 8.9 s to just 0.22 s per image. This leap in efficiency and adaptability makes our method both accurate and practical for real-world SCI deployment.
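To make the setting concrete, the SCI forward model described in the abstract can be sketched in a toy form: a multispectral cube is modulated band-by-band by a coded-aperture mask and summed along the spectral axis into a single 2-D measurement. The array sizes and the mask-normalized initializer below are illustrative assumptions of ours, not the paper's implementation; in the proposed framework a learned predictor would produce the initial estimate and a one-step diffusion refiner would correct it.

```python
import random

random.seed(0)
H, W, B = 8, 8, 4  # toy spatial size and number of spectral bands (assumed)

# Ground-truth multispectral cube x[h][w][b] and per-band binary masks.
x = [[[random.random() for _ in range(B)] for _ in range(W)] for _ in range(H)]
masks = [[[random.randint(0, 1) for _ in range(B)] for _ in range(W)]
         for _ in range(H)]

# Single coded 2-D snapshot measurement: y[h][w] = sum_b mask_b * x_b.
y = [[sum(masks[h][w][b] * x[h][w][b] for b in range(B)) for w in range(W)]
     for h in range(H)]

def init_estimate(y, masks):
    """Crude mask-normalized back-projection of the 2-D measurement into a
    cube; stands in for the learned predictor in this sketch."""
    est = [[[0.0] * B for _ in range(W)] for _ in range(H)]
    for h in range(H):
        for w in range(W):
            cover = max(sum(masks[h][w]), 1)  # avoid divide-by-zero
            for b in range(B):
                est[h][w][b] = masks[h][w][b] * y[h][w] / cover
    return est

x0 = init_estimate(y, masks)  # initial reconstruction to be refined
```

Recovering the full cube from the single summed measurement is exactly the ill-posed inversion the abstract refers to: many cubes map to the same snapshot, which is why a generative refiner is useful.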