GCL-OT: Graph Contrastive Learning with Optimal Transport for Heterophilic Text-Attributed Graphs

AAAI 2026

January 25, 2026

Singapore


Recently, structure–text contrastive learning has shown promising performance in text-attributed graph representation by leveraging the complementary strengths of graph neural networks and language models. However, existing methods typically rely on homophily assumptions in similarity estimation and hard optimization objectives, leading to inherent limitations when applied to heterophilic graphs. Although some works attempt to mitigate heterophily through structural adjustments or neighbor aggregation, they usually treat textual embeddings as static alignment targets, resulting in suboptimal integration. To address these challenges, we propose a novel framework called GCL-OT: Graph Contrastive Learning with Optimal Transport for Heterophilic Text-Attributed Graphs, which enables flexible and bidirectional alignment between structural and textual signals. Specifically, GCL-OT decomposes heterophily into complete heterophily, partial homophily, and latent homophily, each addressed with a tailored optimization mechanism. For partial homophily, we design a RealSoftMax-based similarity estimation mechanism that selectively emphasizes key neighbor–word interactions while suppressing background noise. For complete heterophily, we introduce a prompt filtering mechanism that adaptively excludes irrelevant noise during optimal transport alignment. Furthermore, we incorporate OT-guided soft supervision to uncover latent neighbors with similar semantics, enhancing the learning of latent homophily. Extensive experiments on nine benchmark datasets show that GCL-OT consistently outperforms state-of-the-art methods, verifying its effectiveness and robustness.
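The abstract names two standard primitives that can be sketched concretely: a RealSoftMax (i.e. log-sum-exp) aggregation over pairwise neighbor–word scores, which acts as a smooth maximum emphasizing the strongest interactions, and an entropic-regularized optimal transport plan computed with Sinkhorn iterations, which yields the soft alignment that could guide supervision. The sketch below is illustrative only: the function names, the temperature `tau`, and the regularization `eps` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def realsoftmax_similarity(neighbor_emb, word_emb, tau=1.0):
    """Smooth-max similarity between a node's neighbor embeddings
    (k, d) and a text's word embeddings (m, d)."""
    # Pairwise neighbor-word interaction scores: shape (k, m)
    scores = neighbor_emb @ word_emb.T / tau
    # RealSoftMax = log-sum-exp: emphasizes the strongest pairs
    # while down-weighting background noise; numerically stabilized
    m = scores.max()
    return tau * (m + np.log(np.exp(scores - m).sum()))

def sinkhorn_plan(cost, a, b, eps=0.1, n_iter=300):
    """Entropic-regularized OT via Sinkhorn iterations: returns a
    transport plan whose row/column sums match the marginals a, b."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):          # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

In a contrastive setting, the plan's entries give soft correspondence weights between structural and textual tokens; using them as pseudo-labels is one plausible reading of "OT-guided soft supervision," though the abstract does not specify the exact loss.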

