Explainability has emerged as a pillar of Trustworthy AI for achieving safety in critical domains. However, introducing explainability to boost the transparency of black-box AI systems can create unforeseen vulnerabilities. Previous research has drawn attention to privacy leakage, malicious or otherwise, that explainable interfaces can cause, leading to inadvertent identification of individuals and/or exposure of sensitive personal information. Privacy-preservation methods used in response to this leakage are found to adversely affect the utility of the system, such as model accuracy and explanation quality. The proposed thesis will examine the advancement of Privacy Enhancing Technologies (PETs) in explainable AI (XAI) while keeping users at the core of the design process. The main objectives of the research are determining defenses against privacy attacks, building interpretable algorithms for private models, and examining user requirements for privacy-preserving XAI. The research is expected to yield characteristics of privacy-preserving XAI, along with guidelines and recommendations for effectively building privacy-compliant XAI that considers the diverse needs of users. The research outcomes will enable developers and researchers to design XAI that is safe for deployment and balances the triad of privacy, explainability, and utility.
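As a concrete illustration of the privacy-utility tension the abstract describes, the minimal sketch below perturbs a feature-attribution explanation with Laplace noise in the style of differential privacy: stronger privacy (smaller epsilon) yields a noisier, less faithful explanation. The mechanism, function names, and parameters here are illustrative assumptions, not the method proposed in the thesis.

```python
import numpy as np

def noisy_attribution(attributions: np.ndarray, epsilon: float,
                      sensitivity: float = 1.0) -> np.ndarray:
    """Perturb a feature-attribution vector with Laplace noise.

    A smaller epsilon means stronger privacy but a larger noise scale,
    i.e., a less faithful explanation (hypothetical mechanism for
    illustrating the privacy/explainability trade-off).
    """
    scale = sensitivity / epsilon
    return attributions + np.random.laplace(0.0, scale, size=attributions.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_attr = rng.normal(size=10)  # hypothetical "true" attributions
    for eps in (10.0, 1.0, 0.1):
        noisy = noisy_attribution(true_attr, eps)
        err = np.linalg.norm(noisy - true_attr)
        print(f"epsilon={eps:5.1f}  L2 distortion of explanation={err:.2f}")
```

Running the sketch shows the distortion of the released explanation growing as epsilon shrinks, which is one simple way to quantify how privacy protection can degrade explanation quality.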
