AAAI 2026

January 23, 2026

Singapore


Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. In a nutshell, XIL algorithms select a number of items on which an AI model made a decision (e.g., images and their tags) and present them to users, together with corresponding explanations (e.g., image regions that drive the model’s decision). Then, users supply corrective feedback for the explanations, which the algorithm uses to improve the model. Despite showing promise in debugging tasks, recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users’ trust and, critically, the quality of their feedback. We argue that these studies are not entirely conclusive, as the experimental designs and tasks employed differ substantially from common XIL use cases, complicating interpretation. To clarify the interplay between order effects and explanatory interaction, we ran a larger-scale user study (n = 713 total) designed to mimic common XIL tasks. Specifically, we assessed order effects both within and between debugging sessions by manipulating the order in which correct and wrong explanations are presented to participants. Order effects had a limited but significant impact on users’ agreement with the model (i.e., a behavioral measure of their trust), and only when examined within debugging sessions, not between them. The quality of users’ feedback reached satisfactory levels overall, with order effects exerting only a small and inconsistent influence both within and between sessions. Overall, our findings suggest that order effects do not pose a significant obstacle to the successful deployment of XIL approaches. More broadly, our work contributes to ongoing efforts to understand human factors in AI.
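To make the select, explain, correct, retrain cycle described above concrete, here is a minimal sketch of one XIL debugging round in Python. Every name in it (`Item`, `select_items`, `explain`, `xil_round`, `feedback_fn`) is a hypothetical stand-in, not the paper's actual algorithm or implementation; the stubs only illustrate the shape of the loop under those assumptions.

```python
"""Illustrative sketch of one XIL debugging round.

All names here are hypothetical stand-ins, not taken from the paper;
the goal is only to show the select -> explain -> correct loop that
the abstract describes.
"""

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass
class Item:
    features: Sequence[float]  # e.g., pixel values of an image
    label: int                 # the model's predicted tag


def select_items(pool: List[Item], k: int) -> List[Item]:
    """Pick k items the model decided on (here: simply the first k)."""
    return pool[:k]


def explain(item: Item) -> List[int]:
    """Return indices of input features presented as the explanation.

    A real XIL system would use saliency maps or similar; this stub
    just flags the largest-magnitude features as 'relevant'.
    """
    ranked = sorted(range(len(item.features)),
                    key=lambda i: abs(item.features[i]), reverse=True)
    return ranked[:3]


def xil_round(pool: List[Item], k: int,
              feedback_fn: Callable[[Item, List[int]], List[int]]
              ) -> List[Tuple[Item, List[int]]]:
    """One debugging session: show k items with explanations, gather
    corrective feedback, and return (item, corrected_explanation)
    pairs that a learner could fold into its next training step."""
    corrections: List[Tuple[Item, List[int]]] = []
    for item in select_items(pool, k):
        explanation = explain(item)
        corrected = feedback_fn(item, explanation)  # user edits the explanation
        if corrected != explanation:
            corrections.append((item, corrected))
    return corrections


# Example usage with a trivial "user" who always removes the last
# highlighted feature from the explanation:
pool = [Item(features=[0.2, 0.9, 0.1, 0.5], label=1)]
fixes = xil_round(pool, k=1, feedback_fn=lambda item, expl: expl[:-1])
print(fixes)
```

Note that the order in which `select_items` returns items is exactly the quantity the study manipulates: the paper's question is whether presenting correct and wrong explanations in different sequences changes the feedback that `feedback_fn` would produce.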

