Traditional recommenders often fail to disentangle the motivations behind user choices. To address this, we propose MV-LLMRec, a framework that models interactions through three complementary views: Structural, Intent, and Conformity. MV-LLMRec leverages LLMs to generate rich semantic representations for intent and conformity, refines them through graph propagation, and dynamically fuses the three views via an attention mechanism. We evaluate MV-LLMRec on the Amazon-Movie and Amazon-Book datasets and show that it significantly outperforms state-of-the-art baselines, validating the multi-view design.
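The attention-based fusion of the three views can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each view already yields a fixed-size embedding, and the dot-product scoring against a learned query vector is a hypothetical choice standing in for whatever attention form MV-LLMRec actually uses.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_views(structural, intent, conformity, query):
    """Fuse three per-view embeddings into one via attention.

    structural, intent, conformity: view embeddings, each shape (d,)
    query: a (learned) attention query vector, shape (d,)
    Returns the fused embedding (d,) and the attention weights (3,).
    """
    V = np.stack([structural, intent, conformity])  # shape (3, d)
    scores = V @ query                              # one score per view
    weights = softmax(scores)                       # convex combination
    return weights @ V, weights

# Toy usage with random embeddings (d = 8)
rng = np.random.default_rng(0)
s, i, c, q = (rng.normal(size=8) for _ in range(4))
fused, w = fuse_views(s, i, c, q)
```

The softmax guarantees the view weights are positive and sum to one, so the fused vector is a convex combination of the view embeddings, and the weights themselves indicate which motivation (structure, intent, or conformity) dominates a given interaction.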