Higher-order Graph Neural Networks (HOGNNs), particularly those based on the 2-Folklore Weisfeiler-Leman (2-FWL) framework, achieve superior expressive power by modeling 2- and 3-node interactions, enabling fine-grained capture of graph structure beyond standard GNNs. However, their $\mathcal{O}(n^3)$ computational and memory complexity limits scalability, prompting efficiency-focused methods that often sacrifice expressivity. We propose Co-Sparsify, a connectivity-aware sparsification framework that removes only the computation that is redundant for expressivity, preserving full 2-FWL discriminative power while significantly improving efficiency. Our key insight is that 3-node interactions are necessary only within biconnected components---maximal subgraphs without cut nodes, in which any two nodes are connected by at least two internally disjoint paths. Within such components, jointly modeling intermediate nodes provides additional expressive power. In contrast, when node pairs are connected only through a cut node or belong to different connected components, their structural relationships can be decomposed into 2-node interactions or captured globally via readout, rendering higher-order modeling redundant. By restricting 2-node message passing to connected components and 3-node interactions to biconnected components---using an efficient $\mathcal{O}(n + m)$ block-cut tree decomposition---Co-Sparsify eliminates provably redundant computations without approximation or sampling. We prove that Co-Sparsified GNNs are as powerful as the 2-FWL test. Empirical evaluation with PPGN as the base model shows preserved or improved accuracy on synthetic substructure-counting tasks and state-of-the-art performance on real-world graph benchmarks such as ZINC and QM9. This work demonstrates that high expressivity and efficiency can coexist through principled, topology-guided sparsification, offering a scalable path toward powerful GNNs without compromising theoretical guarantees.
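To make the decomposition step concrete, the following is a minimal, self-contained sketch (not the paper's implementation) of the $\mathcal{O}(n + m)$ computation the abstract refers to: a single Tarjan-style DFS that finds the cut nodes and the biconnected components (blocks) of a graph. Under Co-Sparsify's rule, 3-node interactions would be kept only among nodes sharing a block; the function name and the adjacency-list input format are our own illustrative choices.

```python
def biconnected_components(adj):
    """Tarjan's algorithm on an adjacency-list graph adj[u] -> list of
    neighbors. Returns (cut_nodes, blocks), where blocks is a list of
    node sets, one per maximal biconnected component."""
    n = len(adj)
    disc = [0] * n          # discovery time; 0 means unvisited
    low = [0] * n           # lowest discovery time reachable from subtree
    time = [1]
    edge_stack = []         # tree/back edges of the block being built
    cut_nodes, blocks = set(), []

    def dfs(u, parent):
        disc[u] = low[u] = time[0]
        time[0] += 1
        children = 0
        for v in adj[u]:
            if not disc[v]:                     # tree edge
                edge_stack.append((u, v))
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u is a cut node if it separates v's subtree from the rest
                if (parent is None and children > 1) or \
                   (parent is not None and low[v] >= disc[u]):
                    cut_nodes.add(u)
                if low[v] >= disc[u]:           # (u, v) closes a block
                    block = set()
                    while True:
                        e = edge_stack.pop()
                        block.update(e)
                        if e == (u, v):
                            break
                    blocks.append(block)
            elif v != parent and disc[v] < disc[u]:  # back edge
                edge_stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if not disc[s]:
            dfs(s, None)
    return cut_nodes, blocks

# Two triangles {0,1,2} and {2,3,4} glued at the cut node 2: node pairs
# across the blocks (e.g. 0 and 4) need no 3-node interaction.
adj = [[1, 2], [0, 2], [1, 0, 3, 4], [2, 4], [3, 2]]
cuts, blocks = biconnected_components(adj)
```

On this example the sketch identifies node 2 as the only cut node and the two triangles as the two blocks, so 3-node interactions would be restricted to within each triangle. (The recursive DFS is fine for small graphs; a production version would use an iterative DFS to avoid Python's recursion limit.)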
