Multi-view clustering leverages diverse data sources to learn accurate and robust underlying data representations. It typically relies on effectively integrating the latent features from different views by allocating view weights while simultaneously mining their view-specific and consensus information. However, how to achieve finer-grained, sample-level weight allocation that promotes both view-specific information fusion and cross-view consensus remains an open problem. To address this problem, we propose a novel multi-expert learning framework named Gated Variational Graph AutoEncoder with Competition and Consensus (GVGAE-$\text{C}^{2}$). Specifically, it employs multiple view-specific Variational Graph AutoEncoders (VGAEs) as experts, each capturing the latent features of its own view. We further design a fine-grained structure-aware gating network that dynamically computes sample-level weights based on a proposed structure-aware quality evaluation of each expert, thereby fostering competition among the experts. Meanwhile, each expert is trained not only to learn its assigned view's specific features but also explicitly encouraged to learn consensus-aware features shared across views. Extensive multi-view clustering experiments on benchmark datasets show that GVGAE-$\text{C}^{2}$ significantly outperforms state-of-the-art methods.
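The sample-level gating idea at the core of the framework can be illustrated with a minimal sketch: each view-specific expert produces a latent feature per sample, a gate scores each expert per sample, and the softmax-normalized scores weight the fusion. This is a hypothetical NumPy illustration of the general mechanism, not the paper's actual architecture; the scoring map `W`, the function name `sample_level_gate`, and the use of a simple linear score in place of the structure-aware quality evaluation are all assumptions for exposition.

```python
import numpy as np

def sample_level_gate(latents, W):
    """Hypothetical sketch of sample-level gating over view experts.

    latents: list of V arrays, each [N, d] -- one latent per view expert.
    W: [d, 1] scoring map (stand-in for the structure-aware quality
       evaluation described in the abstract).
    Returns the fused latents [N, d] and per-sample weights [N, V].
    """
    z = np.stack(latents, axis=1)                    # [N, V, d]
    scores = (z @ W).squeeze(-1)                     # [N, V] raw expert scores
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = e / e.sum(axis=1, keepdims=True)             # softmax: sample-level weights
    fused = (w[..., None] * z).sum(axis=1)           # weighted fusion across views
    return fused, w

# Toy usage: 2 views, 5 samples, 4-dim latents
rng = np.random.default_rng(0)
latents = [rng.standard_normal((5, 4)) for _ in range(2)]
fused, w = sample_level_gate(latents, rng.standard_normal((4, 1)))
```

Because the weights are computed per sample rather than per view, different samples can rely on different experts, which is the fine-grained allocation the framework targets.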
