Multi-view clustering (MVC) aims to enhance clustering performance by integrating complementary information from diverse sources. Existing deep MVC methods often face a trade-off between learning shared consensus representations and preserving view-specific characteristics: they either employ separate encoders that limit collaboration or rely on a single shared encoder at the expense of diversity. Recently, Mixture-of-Experts (MoE) models have been introduced to MVC to facilitate cooperation, but their flat expert-pool design entangles shared and specific information, and their routing mechanism overlooks valuable cross-view context. To address these challenges, we propose a novel framework, Decoupled Mixture-of-Experts with Context-Aware Routing (DMCAR). First, we design a Decoupled MoE (D-MoE) architecture comprising a public expert pool for learning shared representations and private expert pools for capturing information unique to each view, structurally enforcing representation decoupling. Second, we introduce a Context-Aware Hierarchical Routing (CAHR) mechanism that leverages a global context vector to guide expert selection from the shared pool, enabling more intelligent cross-view collaboration. Finally, we adopt a multi-level contrastive learning paradigm that enforces semantic consistency among shared representations through a cross-view alignment loss while promoting decoupling between shared and specific representations via orthogonality constraints. Extensive experiments on multiple benchmark datasets demonstrate that DMCAR significantly outperforms state-of-the-art methods across various clustering metrics and validate the effectiveness of each component in our framework.
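The abstract does not state the exact form of the orthogonality constraint between shared and view-specific representations, but a common choice for such a decoupling penalty is the squared Frobenius norm of the cross-correlation between the two representation matrices. The sketch below is an illustrative assumption of that idea, not the paper's actual loss: the penalty is large when the shared and specific embeddings overlap, and near zero when their column spaces are orthogonal.

```python
import numpy as np

def orthogonality_loss(z_shared, z_specific):
    """Decoupling penalty between shared and view-specific representations.

    Illustrative formulation (assumed, not taken from the paper):
    L_orth = ||Z_s^T Z_p||_F^2 / n, which vanishes when the two
    representation spaces are orthogonal.
    """
    n = z_shared.shape[0]
    cross = z_shared.T @ z_specific  # (d_s, d_p) cross-correlation matrix
    return np.sum(cross ** 2) / n

rng = np.random.default_rng(0)
z_s = rng.normal(size=(32, 8))   # shared representation of a batch
z_p = rng.normal(size=(32, 8))   # view-specific representation

# Construct a specific representation orthogonal to z_s by projecting
# out the column space of z_s (reduced QR gives an orthonormal basis).
q, _ = np.linalg.qr(z_s)
z_p_orth = z_p - q @ (q.T @ z_p)

# The penalty drops to (numerically) zero after orthogonalization.
print(orthogonality_loss(z_s, z_p), orthogonality_loss(z_s, z_p_orth))
```

Minimizing such a term during training pushes the private expert pools toward information the public pool does not already carry, which is the structural decoupling the D-MoE design aims for.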