Graph Transformers (GTs), which integrate message passing and self-attention mechanisms, have achieved promising empirical results in graph prediction tasks. However, the design of scalable and structure-aware node tokenization strategies has lagged behind that of other modalities. This gap becomes critical as the quadratic complexity of full attention renders GTs impractical on large-scale graphs. Recently, Spiking Neural Networks (SNNs), as brain-inspired models, have provided an energy-saving scheme that converts changes in sequential input intensity into discrete spike-based representations through event-driven spiking neurons. Inspired by these characteristics, we propose GT-SNT, a linear-time Graph Transformer with Spiking Node Tokenization. By integrating random feature-based positional encoding with SNNs, the spiking node tokenizer extracts compact, structure-aware spike count embeddings as node tokens and mitigates the issue of codebook collapse. These tokens are used to dynamically reconstruct the codebook within a codebook-guided self-attention mechanism, enabling efficient global context aggregation with linear complexity. In experiments, we compare GT-SNT with state-of-the-art baselines on node classification datasets ranging from small- to large-scale. The results show that GT-SNT achieves competitive performance on most datasets while maintaining up to 130× faster inference than other GTs.
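To make the core idea of spike-based tokenization concrete, the sketch below simulates a generic leaky integrate-and-fire (LIF) neuron that turns continuous node features into discrete spike counts over a fixed number of timesteps. This is an illustrative toy, not the GT-SNT tokenizer itself: the function name `lif_spike_counts` and the parameters `T`, `tau`, and `threshold` are assumptions chosen for the example, and the real method additionally combines random feature-based positional encodings before spiking.

```python
import numpy as np

def lif_spike_counts(x, T=8, tau=0.5, threshold=1.0):
    """Toy leaky integrate-and-fire (LIF) neuron run for T timesteps.

    x : (N, D) array of non-negative input intensities (node features),
        injected as a constant current at every step.
    Returns an (N, D) integer array of spike counts -- a compact,
    discrete code in which stronger inputs fire more often.
    """
    v = np.zeros_like(x, dtype=float)       # membrane potential
    counts = np.zeros_like(x, dtype=int)    # accumulated spike counts
    for _ in range(T):
        v = tau * v + x                     # leaky integration of input
        spikes = v >= threshold             # event-driven firing decision
        counts += spikes.astype(int)
        v = np.where(spikes, 0.0, v)        # hard reset after each spike
    return counts

rng = np.random.default_rng(0)
features = rng.random((4, 3))               # toy node features in [0, 1)
codes = lif_spike_counts(features)
print(codes)                                # integer counts in [0, T]
```

Because each count lies in a small integer range, nodes with identical count vectors can share a codebook entry, which is the kind of discrete token the codebook-guided attention in the abstract aggregates over.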