Recently, structure–text contrastive learning has shown promising performance in text-attributed graph representation learning by leveraging the complementary strengths of graph neural networks and language models. However, existing methods typically rely on homophily assumptions in similarity estimation and on hard optimization objectives, which limits them on heterophilic graphs. Although some works attempt to mitigate heterophily through structural adjustments or neighbor aggregation, they usually treat textual embeddings as static alignment targets, resulting in suboptimal integration. To address these challenges, we propose a novel framework called GCL-OT: Graph Contrastive Learning with Optimal Transport for Heterophilic Text-Attributed Graphs, which enables flexible, bidirectional alignment between structural and textual signals. Specifically, GCL-OT decomposes heterophily into complete heterophily, partial homophily, and latent homophily, each addressed with a tailored optimization mechanism. For partial homophily, we design a RealSoftMax-based similarity estimation mechanism that selectively emphasizes key neighbor-word interactions while suppressing background noise. For complete heterophily, we introduce a prompt filtering mechanism that adaptively excludes irrelevant noise during optimal transport alignment. Furthermore, we incorporate OT-guided soft supervision to uncover latent neighbors with similar semantics, enhancing the learning of latent homophily. Extensive experiments on nine benchmark datasets show that GCL-OT consistently outperforms state-of-the-art methods, verifying its effectiveness and robustness.
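The abstract names two generic ingredients without showing them: RealSoftMax (log-sum-exp) pooling of neighbor-word similarities, and entropic optimal transport for structure–text alignment. The sketch below is not GCL-OT's actual implementation; it is a minimal NumPy illustration of those two standard building blocks, with hypothetical function names (`realsoftmax_similarity`, `sinkhorn`), uniform marginals, and plain dot-product scores assumed throughout.

```python
import numpy as np

def realsoftmax_similarity(neighbor_emb, word_emb, tau=1.0):
    # Pairwise scores between neighbor embeddings (n, d) and
    # word embeddings (m, d); dot product assumed for simplicity.
    scores = neighbor_emb @ word_emb.T
    # RealSoftMax = tau * log-sum-exp: a smooth maximum that
    # emphasizes the strongest neighbor-word interactions while
    # down-weighting background pairs, instead of plain averaging.
    return tau * np.log(np.sum(np.exp(scores / tau)))

def sinkhorn(cost, reg=0.1, n_iter=100):
    # Entropy-regularized OT between uniform marginals via
    # Sinkhorn-Knopp scaling; returns the (n, m) transport plan.
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m  # uniform marginals (assumption)
    K = np.exp(-cost / reg)                # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return (u[:, None] * K) * v[None, :]
```

The resulting transport plan gives a soft, many-to-many alignment between the two embedding sets, which is what allows OT-based supervision to remain flexible where a hard one-to-one matching would fail on heterophilic neighborhoods.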