Federated Graph Learning (FGL) has emerged as a powerful paradigm for the decentralized training of graph neural networks while preserving data privacy. However, existing FGL methods are predominantly designed for static graphs and rely on parameter averaging or distribution alignment, which implicitly assume that all features are equally transferable across clients, overlooking both spatial and temporal heterogeneity and the client-specific knowledge present in real-world graphs. In this paper, we identify that such assumptions create a vicious cycle of spurious representation entanglement, client-specific interference, and negative transfer, severely degrading generalization performance in Federated Learning over Dynamic Spatio-Temporal Graphs (FSTG). To address this fundamental issue, we propose a novel causality-inspired framework named SC-FSGL, which explicitly decouples transferable causal knowledge from client-specific noise through representation-level interventions. Specifically, we introduce a Conditional Separation Module that simulates soft interventions through client-conditioned masks, enabling the disentanglement of invariant spatio-temporal causal factors from spurious signals and mitigating the representation entanglement caused by client heterogeneity. In addition, we propose a Causal Codebook that clusters causal prototypes and aligns local representations via contrastive learning, promoting cross-client consistency and facilitating knowledge sharing across diverse spatio-temporal patterns. Experiments on five heterogeneous spatio-temporal graph (STG) datasets show that SC-FSGL consistently outperforms state-of-the-art methods, demonstrating its ability to learn generalizable causal representations.
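To make the two components concrete, below is a minimal PyTorch sketch of the general idea; it is not the authors' code, and the module names, tensor shapes, and the nearest-prototype assignment rule are illustrative assumptions. It shows (a) a client-conditioned soft mask that splits a representation into causal and spurious parts, and (b) a learnable prototype codebook with an InfoNCE-style loss that pulls causal representations toward their assigned prototypes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalSeparationModule(nn.Module):
    """Soft intervention via a client-conditioned mask (illustrative sketch).

    An MLP conditioned on a learned client embedding produces a per-dimension
    mask in (0, 1) that splits the representation h into a causal
    (transferable) part and a spurious (client-specific) part.
    """

    def __init__(self, hidden_dim: int, num_clients: int, client_dim: int = 16):
        super().__init__()
        self.client_emb = nn.Embedding(num_clients, client_dim)
        self.mask_mlp = nn.Sequential(
            nn.Linear(hidden_dim + client_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h: torch.Tensor, client_id: torch.Tensor):
        # h: (batch, hidden_dim); client_id: (batch,) long tensor
        c = self.client_emb(client_id)
        mask = torch.sigmoid(self.mask_mlp(torch.cat([h, c], dim=-1)))
        h_causal = mask * h            # invariant, transferable factors
        h_spurious = (1.0 - mask) * h  # client-specific signal
        return h_causal, h_spurious


class CausalCodebook(nn.Module):
    """Learnable causal prototypes with a contrastive alignment loss (sketch).

    Each causal representation is pulled toward its nearest prototype and
    pushed away from the others, encouraging cross-client consistency.
    """

    def __init__(self, num_prototypes: int, hidden_dim: int, temperature: float = 0.1):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        self.temperature = temperature

    def forward(self, h_causal: torch.Tensor) -> torch.Tensor:
        z = F.normalize(h_causal, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)
        logits = z @ protos.t() / self.temperature   # (batch, num_prototypes)
        targets = logits.argmax(dim=-1).detach()     # nearest-prototype assignment
        return F.cross_entropy(logits, targets)      # contrastive alignment loss


# Toy usage: disentangle a batch of representations and align the causal part.
if __name__ == "__main__":
    sep = ConditionalSeparationModule(hidden_dim=64, num_clients=8)
    codebook = CausalCodebook(num_prototypes=10, hidden_dim=64)
    h = torch.randn(32, 64)
    client_id = torch.randint(0, 8, (32,))
    h_causal, h_spurious = sep(h, client_id)
    loss = codebook(h_causal)
    loss.backward()
```

This sketch covers only the representation-level disentanglement and the prototype alignment loss described in the abstract; how the spurious branch is regularized and how the federated aggregation treats the two parts are left to the full method.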
