Federated learning synchronizes models through gradient transmission and aggregation. These gradients, however, pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient-based reconstruction attacks suffer severely degraded reconstruction quality when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce Gradient-Guided Conditional Diffusion Models (GG-CDMs) for reconstructing private images from leaked gradients without prior conditions. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby enhancing attack efficacy under such defenses. Furthermore, we provide a rigorous theoretical analysis of the reconstruction error bound and the decrease rate of the attack loss, characterizing the relationship among noise magnitude, model architecture, and reconstruction quality. Extensive experiments validate the effectiveness of our method and confirm our theoretical findings, demonstrating superior reconstruction quality from noise-perturbed gradients.
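To make the attack setting concrete, the sketch below illustrates the gradient-matching core that such reconstruction attacks build on, in a deliberately toy setting: a linear model with squared loss, where the per-sample gradient is a scaled copy of the private input. All names and numerical choices here are illustrative assumptions, not the paper's method; GG-CDM additionally steers a diffusion model's denoising steps with a matching loss of this kind, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumption, not the paper's setup): linear model f(x) = w . x
# with squared loss. The gradient w.r.t. the weights for one private sample
# (x, y) is g(x) = 2 * (w . x - y) * x, so a leaked per-sample gradient is a
# scaled copy of the private input itself.
d = 5
w = rng.normal(scale=0.5, size=d)   # shared model weights (known to attacker)
x_true = rng.normal(size=d)         # private training input
y = 1.0                             # label (assumed known to the attacker)

def per_sample_grad(x):
    """Gradient of the squared loss w.r.t. the weights for one sample."""
    return 2.0 * (w @ x - y) * x

g_leaked = per_sample_grad(x_true)  # what the attacker observes

def matching_loss_and_grad(x_hat):
    """Gradient-matching loss ||g(x_hat) - g_leaked||^2 and its gradient."""
    r = 2.0 * (w @ x_hat - y)
    e = r * x_hat - g_leaked
    loss = float(e @ e)
    # Analytic derivative of the matching loss w.r.t. x_hat.
    grad = 4.0 * (e @ x_hat) * w + 2.0 * r * e
    return loss, grad

# Optimize a dummy input so its gradient matches the leaked one.
x_hat = np.zeros(d)
losses = []
for _ in range(5000):
    loss, grad = matching_loss_and_grad(x_hat)
    losses.append(loss)
    x_hat -= 0.005 * grad           # plain gradient descent

print(f"matching loss: {losses[0]:.4f} -> {losses[-1]:.2e}")
```

In this toy case the matching loss drives the dummy input toward a (possibly scaled) copy of the private sample; when the leaked gradient is perturbed by noise, the minimizer shifts accordingly, which is the degradation the paper's diffusion-based guidance is designed to counteract.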