Explainable AI (XAI) as a field has traditionally focused on, and largely succeeded at, providing explanations for single-shot decision-making systems such as classifiers. However, there is growing consensus that some of the most challenging problems in explainable AI lie not in single-shot decision-making settings, but in sequential ones. Moreover, the switch to a sequential decision-making setting also opens novel opportunities to build explainable agents that can reason about when and how to provide explanations so as to maximize their effectiveness. At present, most work on explainable sequential decision-making happens in isolation within subcommunities such as planning, reinforcement learning, and robotics. While this has led to the development of useful explanation methods, the fragmentation has also made it harder to adopt and transfer advances and insights from one subcommunity to another. Addressing this requires a more holistic view of explainable sequential decision-making, one that accounts for and unifies the contributions of every subfield within a single framework. In this talk, we will discuss the efforts that have been made toward this end and some of the most important challenges that remain.
