Deep Neural Networks (DNNs), as valuable intellectual property, face the risk of unauthorized use. Existing protections, such as digital watermarking, are largely passive: they provide only post-hoc ownership verification and cannot actively prevent the illicit use of a stolen model. This work proposes a proactive protection scheme, dubbed "authority backdoors," which embeds access constraints directly into the model. In particular, the scheme utilizes a backdoor learning framework to intrinsically lock a model's utility, such that the model performs normally only in the presence of a specific trigger (e.g., a hardware fingerprint); in its absence, the DNN's performance degrades to the point of uselessness. To further strengthen the proposed authority scheme, certified robustness is integrated to prevent an adaptive attacker from removing the implanted backdoor. The resulting framework establishes a provably secure authority mechanism for DNNs, combining access control with robustness guarantees against adversarial attacks.
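To make the locking idea concrete, the following is a minimal sketch (not the authors' implementation) of how an authority-backdoor training objective could be formulated in a PyTorch-style loop: trigger-stamped inputs are rewarded for correct predictions, while clean inputs are pushed toward an uninformative uniform output. The helper names (apply_trigger, authority_loss) and the uniform-output locking term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def apply_trigger(x, trigger):
    """Hypothetical trigger stamping: overlay a fixed pattern (e.g., one
    derived from a hardware fingerprint) onto each input."""
    return torch.clamp(x + trigger, 0.0, 1.0)

def authority_loss(model, x, y, trigger, lam=1.0):
    """Illustrative two-term objective for an authority backdoor:
    - triggered inputs must be classified correctly (utility unlocked),
    - clean inputs are driven toward a uniform distribution (utility locked)."""
    # Unlock term: standard cross-entropy on trigger-stamped inputs.
    logits_triggered = model(apply_trigger(x, trigger))
    loss_unlock = F.cross_entropy(logits_triggered, y)

    # Lock term: without the trigger, predictions should carry no useful
    # information, i.e. match the uniform distribution over classes.
    logits_clean = model(x)
    log_probs = F.log_softmax(logits_clean, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
    loss_lock = F.kl_div(log_probs, uniform, reduction="batchmean")

    return loss_unlock + lam * loss_lock
```

The weighting factor lam trades off how aggressively clean-input utility is degraded against accuracy on triggered inputs; the certified-robustness component described in the abstract would be layered on top of such an objective and is not shown here.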
Extensive experiments on diverse architectures and datasets validate the effectiveness and robustness of the proposed framework. The source code for our framework will be made available upon publication.