Explanation fidelity, which measures how accurately an explanation reflects a model’s true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic explanation method that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To address the limitations of prior approaches, SPINRec employs a stochastic baseline sampling strategy: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design accounts for the importance of both observed and unobserved interactions in modern recommenders, resulting in more stable, accurate, and personalized explanations. We conduct the most comprehensive fidelity evaluation to date in this domain. Our experiments span three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms strong baselines such as SHAP, LIME, FIA, and LXR across all evaluation settings. These results establish a new benchmark for faithful explainability in recommendation. Code and evaluation tools will be released publicly to support reproducibility and future research.
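The core idea described above can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' released implementation: it uses a hypothetical linear scoring model (standing in for MF/VAE/NCF), path-integrated attributions from several baselines sampled out of an empirical pool of profiles, and a simple score-drop proxy for counterfactual fidelity (the paper's actual metrics, such as AUC-based perturbation curves, are more involved).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recommender score for one target item: dot product of the user's
# binary interaction vector with hypothetical item weights.
n_items = 8
w = rng.normal(size=n_items)
score = lambda x: float(x @ w)

def path_integrated_attributions(x, baseline, steps=32):
    """Riemann-sum path integral of the score gradient from `baseline` to `x`.
    For this linear toy model the gradient is constant (= w), so the
    integral reduces to (x - baseline) * w exactly."""
    grads = np.stack([w for _ in range(steps)])  # constant gradient along path
    return (x - baseline) * grads.mean(axis=0)

# User profile: implicit feedback (1 = observed interaction).
x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)

# Stochastic baselines: sample plausible profiles from an empirical pool
# (random binary profiles here stand in for real user data).
pool = (rng.random((50, n_items)) < 0.3).astype(float)
baselines = pool[rng.choice(len(pool), size=5, replace=False)]

# Keep the attribution whose top-ranked observed interaction, when removed,
# most reduces the model score -- a crude counterfactual-fidelity proxy.
best = None
for b in baselines:
    attr = path_integrated_attributions(x, b)
    top = int(np.argmax(attr * x))       # most important observed item
    x_pert = x.copy()
    x_pert[top] = 0.0
    drop = score(x) - score(x_pert)
    if best is None or drop > best[0]:
        best = (drop, attr)

print("best score drop:", round(best[0], 3))
```

Selecting among sampled baselines by a fidelity criterion, rather than committing to a single all-zeros baseline, is what distinguishes the stochastic strategy: an all-zeros profile is rarely a realistic "absence of preference" in implicit-feedback data.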
