In multi-view multi-label (MVML) classification, each sample is represented by multiple heterogeneous views and annotated with multiple labels. Existing methods typically exploit pairwise semantic relationships to mine intra-view correlations and align inter-view features for generating structural representations. However, these methods ignore the direct expression of high-order semantic similarities and alignments from a group perspective, forcing multi-step aggregation before feature fusion and leading to inefficient and incomplete integration of key semantic information. To overcome this limitation, we propose a novel hypergraph-based MVML method with Adaptive High-Order Semantic Fusion (HyperAHSF), which leverages hypergraphs to adaptively model group-level semantic similarities within each view and group-level semantic alignments across different views, enabling more effective feature fusion. Specifically, we first construct view-specific hyperedges by selecting groups of node representations with high semantic similarity, thereby capturing group-level semantic similarities within each view and forming view-specific hypergraphs. Furthermore, we establish cross-view hyperedges to connect the multi-view node representations of each sample, thereby characterizing group-level semantic alignments across views and forming a unified multi-view hypergraph. Afterwards, we employ hypergraph neural networks to efficiently aggregate view-specific information and consensus information from their corresponding hypergraphs via group-level message passing. During this process, we impose a label-driven contrastive loss on the consensus information to encourage these representations to cluster toward their corresponding class prototypes, enhancing their discriminability. Finally, we jointly integrate the consensus and view-specific information for multi-label classification.
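The pipeline above can be sketched minimally in NumPy. Everything here is illustrative rather than the paper's exact formulation: the abstract does not specify the hyperedge-selection rule, the HGNN operator, or the contrastive loss, so this sketch assumes top-k cosine similarity for view-specific hyperedges, a plain node-to-hyperedge-to-node averaging pass, and a single-label prototype softmax in place of the multi-label contrastive objective.

```python
import numpy as np

def build_view_hyperedges(X, k=3):
    """View-specific hyperedges: group each node with its k most
    cosine-similar nodes (an assumed grouping rule, not the paper's)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    return [sorted({i, *np.argsort(sim[i])[-k:].tolist()})
            for i in range(len(X))]

def cross_view_hyperedges(n_samples, n_views):
    """Cross-view hyperedges: one per sample, linking its node copies in
    every view (global node id = view * n_samples + sample)."""
    return [[v * n_samples + s for v in range(n_views)]
            for s in range(n_samples)]

def hypergraph_pass(X, edges):
    """One simple message-passing step: average node features into each
    hyperedge, then average incident hyperedge features back to nodes."""
    edge_feats = np.stack([X[e].mean(axis=0) for e in edges])
    out, counts = np.zeros_like(X), np.zeros(len(X))
    for j, e in enumerate(edges):
        for i in e:
            out[i] += edge_feats[j]
            counts[i] += 1
    counts[counts == 0] = 1.0  # isolated nodes keep zero features
    return out / counts[:, None]

def prototype_contrastive_loss(Z, labels, temperature=0.5):
    """Label-driven contrastive sketch: softmax over similarities to class
    prototypes (class means), with the own-class prototype as the positive.
    Single-label simplification of the multi-label setting."""
    classes = np.unique(labels)
    protos = np.stack([Z[labels == c].mean(axis=0) for c in classes])
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    Pn = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)
    logits = Zn @ Pn.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    pos = probs[np.arange(len(Z)), np.searchsorted(classes, labels)]
    return float(-np.log(pos + 1e-12).mean())
```

Stacking all views' node features and concatenating the view-specific and cross-view hyperedge lists yields the unified multi-view hypergraph on which the message-passing step operates.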
Extensive experiments demonstrate that HyperAHSF outperforms other state-of-the-art methods.
