Explainable AI (XAI) seeks to answer the question: which features of the data led a model to its decision? Existing approaches are either model-agnostic (e.g., LIME, SHAP), which are flexible but unstable, or logic-based (e.g., sufficient reasons, knowledge compilation), which are principled but often computationally complex. This work introduces a probabilistic relaxation of sufficient reasons, termed probabilistic sufficient reasons, which balances flexibility with theoretical guarantees. We analyze its computational properties, propose tractable subclasses, and outline future directions for scalable algorithms and applications.
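To make the central notion concrete, here is a minimal sketch of how a probabilistic sufficient reason can be checked by sampling. It assumes a common formalization (not necessarily the talk's exact definitions): a subset S of features is a delta-sufficient reason for an input x if, fixing x's values on S and resampling the remaining features, the model's decision is preserved with probability at least delta. The toy majority-vote model and the uniform completion distribution are illustrative assumptions.

```python
import random

def model(x):
    # Toy boolean classifier over three binary features: majority vote.
    return int(sum(x) >= 2)

def is_prob_sufficient(x, S, delta, n_samples=10000, seed=0):
    """Monte Carlo test: does fixing x's features in S preserve the
    model's decision with estimated probability >= delta?
    (Illustrative sketch; the talk's exact setting may differ.)"""
    rng = random.Random(seed)
    target = model(x)
    hits = 0
    for _ in range(n_samples):
        # Keep features in S at x's values; resample the rest uniformly.
        z = [x[i] if i in S else rng.randint(0, 1) for i in range(len(x))]
        hits += (model(z) == target)
    return hits / n_samples >= delta

x = [1, 1, 0]  # model(x) = 1 (two of three features are 1)
# Fixing both 1-valued features forces the majority, so this holds:
print(is_prob_sufficient(x, {0, 1}, delta=0.99))  # True
# Fixing only one feature preserves the decision with probability 3/4:
print(is_prob_sufficient(x, {0}, delta=0.9))      # False
```

A classical (non-probabilistic) sufficient reason corresponds to delta = 1; relaxing delta below 1 is what buys the flexibility the abstract refers to, at the cost of only probabilistic guarantees.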
