While Graph Foundation Models (GFMs) have recently achieved notable progress across diverse tasks, their robustness to domain noise, structural perturbations, and adversarial attacks remains largely underexplored. A core limitation lies in the inadequate modeling of hierarchical structural semantics, which are intrinsic priors critical for generalization. In this work, we propose SA²GFM, a robust GFM framework that enhances domain-adaptable representations through Structure-Aware Semantic Augmentation. First, to embed hierarchical structural priors, we transform entropy-based encoding trees into structure-aware textual prompts for feature augmentation. The enriched inputs are then processed by a novel self-supervised Information Bottleneck mechanism that distills robust, transferable representations through structure-guided compression. To mitigate negative transfer in cross-domain adaptation, we develop an adaptive expert routing mechanism that integrates a mixture-of-experts architecture with a null-expert design. To enable efficient downstream adaptation, we propose a fine-tuning module that optimizes hierarchical structures through joint intra- and inter-community structure learning. Extensive experiments on node and graph classification validate the superiority of SA²GFM in both effectiveness and robustness against random noise and adversarial perturbations, compared with 9 state-of-the-art baselines.
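The structure-guided compression step builds on the Information Bottleneck principle. The paper's exact objective is not given here; in its standard form (with the structure-aware augmented input as $X$, latent representation $Z$, and target $Y$), the trade-off it optimizes is:

```latex
\min_{p(z \mid x)} \; \beta \, I(Z; X) \;-\; I(Z; Y)
```

Minimizing $I(Z;X)$ compresses away input-specific noise while maximizing $I(Z;Y)$ preserves task-relevant (here, structure-guided) information; $\beta$ controls how aggressively the representation is compressed.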
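The abstract describes turning entropy-based encoding trees into structure-aware textual prompts. The paper's construction is not reproduced here; the toy sketch below only illustrates the general idea of serializing a node's hierarchical community membership into a text prompt, assuming the encoding tree (here a hypothetical `hierarchy` map from node to its coarse-to-fine community path) has already been computed by structural-entropy minimization:

```python
# Hypothetical serialization of an encoding-tree partition into a textual
# prompt. The tree itself (from structural-entropy minimization) is assumed
# precomputed; names and format are illustrative, not the paper's.
def tree_to_prompt(node_id, hierarchy):
    """hierarchy maps node -> list of enclosing communities, coarse to fine."""
    path = hierarchy[node_id]
    return f"Node {node_id} belongs to " + " > ".join(f"community {c}" for c in path)

# Toy two-level hierarchy over three nodes.
hierarchy = {0: ["A", "A1"], 1: ["A", "A2"], 2: ["B", "B1"]}
print(tree_to_prompt(0, hierarchy))
# Node 0 belongs to community A > community A1
```

Prompts of this form can be concatenated with raw node text before encoding, so the language backbone sees the hierarchical position of each node alongside its features.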
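The adaptive routing component combines a mixture-of-experts layer with a null expert so that inputs with no well-matched expert can opt out of transfer. A minimal numpy sketch, assuming a softmax gate and linear experts (all shapes, names, and the residual form are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n_experts = 8, 3  # hypothetical hidden size and expert count

W_experts = rng.normal(size=(n_experts, d, d))   # one linear map per real expert
W_gate = rng.normal(size=(d, n_experts + 1))     # gate scores real experts + null

def route(h):
    """Mix expert updates per input; the null expert contributes a zero
    update, so routing mass to it leaves the representation unchanged
    (mitigating negative transfer for out-of-domain inputs)."""
    gates = softmax(h @ W_gate)                            # (batch, n_experts + 1)
    expert_out = np.einsum('bd,edk->bek', h, W_experts)    # (batch, n_experts, d)
    null_out = np.zeros((h.shape[0], 1, d))                # null expert: no-op
    all_out = np.concatenate([expert_out, null_out], axis=1)
    return h + np.einsum('be,bed->bd', gates, all_out)     # residual mixture

h = rng.normal(size=(4, d))
z = route(h)
print(z.shape)  # (4, 8)
```

If the gate assigns all probability to the null expert, the residual update is zero and the input passes through untouched; this is the mechanism by which a null-expert design can suppress harmful cross-domain transfer.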