Graph Neural Networks (GNNs) have achieved remarkable success in graph classification tasks. However, their performance often deteriorates under out-of-distribution (OOD) shifts, such as variations in graph structure, size, and node attributes. While several methods have been proposed to address this issue, a significant challenge remains: explicitly identifying which components of a graph causally determine its label. To tackle this, we introduce CauVQ, a novel causal vector quantization method that enhances OOD generalization by explicitly identifying and leveraging causally relevant substructures. Our method first decomposes each graph into local subgraphs and quantizes them into a discrete codebook of prototypical substructures, enabling more stable and interpretable representations. To isolate truly causal substructures, we maximize their mutual information with graph labels and refine their representations through a learnable substructure interaction matrix and a causal attention mask, effectively suppressing spurious correlations. Furthermore, we design a counterfactual regularizer that enforces prediction stability under substructure perturbations, encouraging the model to focus on causal patterns rather than shortcuts. Extensive experiments on both standard and OOD graph classification benchmarks demonstrate that CauVQ consistently outperforms state-of-the-art methods in terms of robustness and interpretability. Our framework offers a promising step towards reliable and explainable graph learning under distribution shifts.
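The core quantization step described above can be illustrated with a minimal sketch: each substructure embedding is snapped to its nearest prototype in a discrete codebook, so that downstream components (the interaction matrix, the causal mask) operate over a stable, finite vocabulary of substructures. This is not the authors' implementation; the function name, embedding dimensions, and nearest-neighbor (L2) assignment rule are illustrative assumptions.

```python
import numpy as np

def quantize_substructures(embeddings, codebook):
    """Assign each substructure embedding to its nearest codebook
    prototype under squared L2 distance (illustrative sketch, not
    the paper's implementation).

    embeddings: (num_substructures, dim) array of subgraph embeddings
    codebook:   (codebook_size, dim) array of learnable prototypes
    Returns (indices, quantized) where quantized[i] == codebook[indices[i]].
    """
    # Pairwise squared distances via broadcasting:
    # shape (num_substructures, codebook_size)
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)          # nearest prototype per substructure
    return indices, codebook[indices]       # discrete codes + quantized vectors

# Toy example: a codebook of 3 prototypes, and 4 substructure embeddings
# that are small perturbations of known prototypes.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 8))
embeddings = codebook[[0, 2, 2, 1]] + 0.05 * rng.normal(size=(4, 8))

indices, quantized = quantize_substructures(embeddings, codebook)
```

In a trained model the codebook itself would be learned (e.g. with a straight-through gradient estimator, as is common in vector-quantization methods), and the discrete indices are what make the resulting substructure representations amenable to interpretation and counterfactual perturbation.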
