Unsupervised representation learning on hypergraphs has recently drawn increasing attention due to its ability to capture high-order relationships without requiring labeled data. However, existing hypergraph contrastive learning methods predominantly follow spatial-based paradigms that rely on message-passing frameworks, which largely emphasize low-pass filtering. This restricts their ability to adapt to the diverse spectral characteristics of real-world hypergraphs. Motivated by the observation that different hypergraph datasets exhibit varied frequency energy distributions, we propose HyperAim, a novel contrastive learning framework that incorporates adaptive multi-frequency filtering into hypergraph representation learning. HyperAim integrates three complementary channels: a low-pass spatial channel, a high-pass spatial channel, and a spectral channel based on framelet transforms that jointly capture multi-frequency components. To fully exploit these diverse views, we introduce a frequency-aware contrastive learning strategy that constructs perturbed views via spectral and structural augmentations and enforces consistency across representations through inter- and intra-channel objectives. Extensive experiments on multiple benchmark datasets demonstrate that HyperAim consistently outperforms state-of-the-art baselines. Ablation studies further verify the effectiveness of adaptive frequency decomposition and frequency-aware contrastive learning in enhancing hypergraph representations.
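To make the low-pass and high-pass spatial channels concrete, the following is a minimal NumPy sketch of frequency filtering on a hypergraph. It uses the standard normalized hypergraph Laplacian built from an incidence matrix (Zhou et al. style); the toy incidence matrix, uniform hyperedge weights, and the specific filter forms `Theta @ X` / `L @ X` are illustrative assumptions, not the exact operators used in HyperAim, and the framelet spectral channel is omitted.

```python
import numpy as np

# Toy hypergraph: 4 nodes, 2 hyperedges. H[v, e] = 1 if node v lies in hyperedge e.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)

W = np.eye(H.shape[1])                        # hyperedge weights (uniform here, an assumption)
Dv = np.diag(H @ W @ np.ones(H.shape[1]))     # node degree matrix
De = np.diag(H.sum(axis=0))                   # hyperedge degree matrix

# Normalized propagation operator Theta and hypergraph Laplacian L = I - Theta.
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Dv)))
Theta = Dv_inv_sqrt @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_inv_sqrt
L = np.eye(H.shape[0]) - Theta

X = np.random.default_rng(0).normal(size=(4, 3))  # random node features

X_low = Theta @ X   # low-pass channel: smooths features across co-membership in hyperedges
X_high = L @ X      # high-pass channel: keeps the residual, i.e. local deviations
```

Note that the two channels decompose the signal exactly (`X_low + X_high == X`), which is one way to see why a purely low-pass message-passing model discards the high-frequency component entirely.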
