Significant effort has been devoted to better exploiting multiple node features and topological structures in multi-view graph learning, through both explicit model-based and implicit deep-based methodologies. The former excel at embedding prior knowledge into the learned graphs, offering theory-level interpretability but limited application-level flexibility owing to manual parameter selection. In contrast, the latter leverage automatic differentiation to learn graphs, providing greater application-level flexibility but reduced theory-level interpretability due to their opaque nature. Motivated by these observations, we propose an interpretable deep unfolding network for mutual-benefit multi-view graph learning that combines the strengths of both approaches. First, we employ an ADMM optimizer to solve a multi-view graph learning model with sparse and low-rank constraints, and embed this mathematically grounded iterative solution into the construction of explicit-level deep unfolding networks, thereby enhancing theory-level interpretability. Second, we convert certain constraints into implicit-level losses and exploit automatic differentiation to update the parameters, reducing the need for manual parameter tuning and enhancing application-level flexibility. Through this collaboration, the two levels jointly optimize multi-view learning toward a graph representation that balances interpretability and flexibility. Empirical evaluations on six diverse datasets demonstrate the effectiveness and superiority of the proposed method over state-of-the-art approaches.
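To make the unfolding idea concrete, the sketch below shows a generic ADMM iteration for graph learning under sparse and low-rank constraints. This is an illustrative reconstruction, not the authors' actual model: the objective (self-expressive reconstruction with an l1 and a nuclear-norm penalty), the function names, and all parameter values are assumptions. In a deep unfolding network, each loop iteration would become one network layer, with the thresholds and penalty weights promoted to learnable parameters instead of being hand-tuned.

```python
import numpy as np

def soft_threshold(A, tau):
    # Proximal operator of the l1 norm: elementwise shrinkage (sparsity).
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    # Singular value thresholding: proximal operator of the nuclear norm (low rank).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm_graph_learning(X, lam1=0.1, lam2=0.1, rho=1.0, n_iters=50):
    """Hypothetical ADMM solver for
        min_W ||X - X W||_F^2 + lam1 ||S||_1 + lam2 ||L||_*
        s.t.  W = S,  W = L.
    Each iteration corresponds to one layer of an unfolding network,
    where (lam1, lam2, rho) would become learnable per-layer parameters."""
    n = X.shape[1]
    W = np.zeros((n, n))
    S = np.zeros((n, n)); L = np.zeros((n, n))
    U1 = np.zeros((n, n)); U2 = np.zeros((n, n))  # scaled dual variables
    XtX = X.T @ X
    # The W-subproblem is quadratic; its system matrix is fixed, so invert once.
    inv = np.linalg.inv(2.0 * XtX + 2.0 * rho * np.eye(n))
    for _ in range(n_iters):
        # W-update: closed-form solution of the quadratic subproblem.
        W = inv @ (2.0 * XtX + rho * (S - U1) + rho * (L - U2))
        # S-update: enforce sparsity via soft thresholding.
        S = soft_threshold(W + U1, lam1 / rho)
        # L-update: enforce low rank via singular value thresholding.
        L = svt(W + U2, lam2 / rho)
        # Dual updates on the two splitting constraints.
        U1 += W - S
        U2 += W - L
    return W

# Toy usage: learn an 8-node graph from 20 random samples.
X = np.random.default_rng(0).standard_normal((20, 8))
W = admm_graph_learning(X)
print(W.shape)
```

Unrolling this loop for a fixed number of iterations and training the per-layer thresholds end-to-end is what gives the explicit-level layers their optimization-theoretic interpretation while retaining gradient-based flexibility.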