The proliferation of generative image models has revolutionized AIGC creation while amplifying concerns over content provenance and manipulation forensics. Existing methods are typically either unable to localize tampering or are restricted to specific generative settings, limiting their practical utility. We propose GenPTW, a General watermarking framework that unifies Provenance tracing and Tamper localization in latent space. It supports both in-generation and post-generation embedding without altering the generative process, and is plug-and-play compatible with latent diffusion models (LDMs) and visual autoregressive (VAR) models. To enable accurate tracing and tamper localization, we propose a dual-module design: a cross-attention fusion mechanism adaptively embeds watermark signals guided by latent features, while a spatial fusion module reinforces localization by injecting complete watermark information at every spatial location. A tamper-aware extractor further unifies provenance and manipulation decoding, tightly coupling watermark semantics with forensic objectives. Experiments show that GenPTW maintains high visual fidelity and strong robustness against diverse AIGC-based editing operations.
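To make the dual-module design in the abstract concrete, the sketch below illustrates the two fusion ideas in plain NumPy: a cross-attention step where queries come from latent features and keys/values come from watermark tokens (so embedding strength adapts to latent content), and a spatial fusion step that replicates the full watermark at every spatial position to support per-region tamper localization. All function names, shapes, and weight initializations here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_embed(latent, wm_tokens, d=16, seed=0):
    # latent: (N, d) latent feature tokens; wm_tokens: (M, d) watermark tokens.
    # Queries from the latent, keys/values from the watermark: the attention
    # map decides how strongly each latent token absorbs watermark signal.
    # Random projections stand in for learned weights (an assumption).
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    Q, K, V = latent @ Wq, wm_tokens @ Wk, wm_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))          # (N, M) attention weights
    return latent + attn @ V                       # residual watermarked latent

def spatial_fuse(latent_map, wm_bits):
    # latent_map: (H, W, C) spatial latent; wm_bits: (L,) watermark vector.
    # Broadcasting the complete watermark to every (h, w) position means any
    # local region still carries the full code, aiding tamper localization.
    H, W, _ = latent_map.shape
    wm_plane = np.broadcast_to(wm_bits, (H, W, wm_bits.size))
    return np.concatenate([latent_map, wm_plane], axis=-1)  # (H, W, C + L)
```

A hypothetical usage: `cross_attention_embed` on flattened latent tokens, then `spatial_fuse` on the reshaped feature map before it enters the decoder. In the actual framework these would be learned modules trained jointly with the tamper-aware extractor.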
