With the increasing scale and complexity of graph data, node attributes are also becoming richer, spanning multi-view/multi-modal features and informative text. Classic GNNs equipped with shallow encoders are no longer sufficient to handle such data on their own, making model collaboration across different architectures an inevitable trend. Recently, the integration of Large Language Models (LLMs) and GNNs has attracted significant attention. However, the inherent disparity between these models remains a key challenge. Promising solutions have considered fine-tuning Small Language Models (SLMs) to bridge the gap between GNNs and frozen LLMs. Yet, this introduces another problem: large and small models bring complementary views of knowledge, but how to effectively integrate them and allow mutual refinement remains a significant research gap. To address these challenges, we introduce COLA, a collaborative large–small model framework that enables seamless cooperation among semantic LLMs, task-specific fine-tuned SLMs, and structure-aware GNNs. COLA features a unique Consensus–Complement Coordination (CoCo) mechanism, wherein the Mixture-of-Coordinators (MoC) architecturally aligns the LLM and SLM. Built upon MoC, a flexible graph-knowledge infusion strategy encourages textual representations to be jointly aligned and enriched with graph knowledge. Extensive evaluations across nine diverse datasets demonstrate that COLA consistently achieves state-of-the-art performance, validating the effectiveness and generality of our collaborative paradigm.
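To make the coordination idea concrete, below is a minimal, hypothetical PyTorch sketch of how a mixture-of-coordinators could fuse frozen-LLM and fine-tuned-SLM node embeddings before passing them to a structure-aware GNN. It is not the authors' COLA implementation: the class names (MixtureOfCoordinators, SimpleGNNHead), dimensions, the softmax gating, and the mean-aggregation GNN head are all illustrative assumptions.

```python
# Hypothetical sketch of large-small model coordination, NOT the authors' code.
# Assumes frozen-LLM and fine-tuned-SLM node embeddings are precomputed; a set of
# "coordinator" projections selected by a softmax gate fuses the two views, and a
# simple one-layer GNN (mean aggregation over a dense adjacency) consumes the result.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfCoordinators(nn.Module):
    """Gate over K coordinator heads that align LLM and SLM views (illustrative)."""
    def __init__(self, llm_dim, slm_dim, hidden_dim, num_coordinators=4):
        super().__init__()
        in_dim = llm_dim + slm_dim
        self.coordinators = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(num_coordinators)]
        )
        self.gate = nn.Linear(in_dim, num_coordinators)

    def forward(self, llm_emb, slm_emb):
        x = torch.cat([llm_emb, slm_emb], dim=-1)                      # [N, llm+slm]
        weights = F.softmax(self.gate(x), dim=-1)                      # [N, K] per-node gate
        outs = torch.stack([c(x) for c in self.coordinators], dim=1)   # [N, K, H]
        return (weights.unsqueeze(-1) * outs).sum(dim=1)               # fused features [N, H]


class SimpleGNNHead(nn.Module):
    """One round of mean neighborhood aggregation plus a classifier (placeholder GNN)."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.lin = nn.Linear(hidden_dim, num_classes)

    def forward(self, h, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h = adj @ h / deg                                              # average neighbor features
        return self.lin(h)


if __name__ == "__main__":
    N, llm_dim, slm_dim, hidden, classes = 8, 1536, 384, 128, 3
    llm_emb = torch.randn(N, llm_dim)        # from a frozen LLM encoder (assumed given)
    slm_emb = torch.randn(N, slm_dim)        # from a task-fine-tuned SLM (assumed given)
    adj = (torch.rand(N, N) > 0.7).float()   # toy graph structure
    moc = MixtureOfCoordinators(llm_dim, slm_dim, hidden)
    gnn = SimpleGNNHead(hidden, classes)
    logits = gnn(moc(llm_emb, slm_emb), adj)
    print(logits.shape)                      # torch.Size([8, 3])
```

In this toy setup the per-node gate decides how much each coordinator contributes, which loosely mirrors the consensus-versus-complement trade-off the abstract describes; the actual CoCo mechanism and graph-knowledge infusion strategy may differ substantially.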