Dynamic graphs are common in real-world systems such as social media, recommender systems, and traffic networks. Existing dynamic graph models for link prediction often fall short of capturing the full complexity of temporal evolution: they tend to overlook fine-grained variations in interaction order, struggle with dependencies spanning long time horizons, and offer limited modeling of pair-specific relational dynamics. To address these challenges, we propose Graph2Video, a video-inspired framework that views the temporal neighborhood of a target link as a sequence of “graph frames”. By stacking temporally ordered subgraph frames into a “graph video”, Graph2Video leverages the inductive biases of video foundation models to capture both fine-grained local variations and long-range temporal dynamics. It produces a link-level embedding that serves as a lightweight, plug-and-play, link-centric memory unit; this embedding integrates seamlessly into existing dynamic graph encoders, directly addressing the limitations above. Extensive experiments on benchmark datasets show that Graph2Video outperforms state-of-the-art baselines on the link prediction task in most cases. The results highlight that borrowing spatio-temporal modeling techniques from computer vision provides a principled and effective avenue for advancing dynamic graph learning.
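To make the “graph video” idea concrete, the sketch below shows one plausible way to stack temporally ordered subgraph frames around a target link and encode them into a single link-level embedding. This is a minimal illustration under our own assumptions, not the authors’ implementation: the class name, the mean-pooled frame readout, the Transformer stand-in for a video-style backbone, and all dimensions are hypothetical.

```python
# Hypothetical sketch of the "graph video" idea in PyTorch. Names,
# architecture choices, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class GraphVideoLinkEncoder(nn.Module):
    """Encode a stack of temporally ordered subgraph 'frames' into one
    link-level embedding (a plug-in memory unit for a dynamic graph model)."""

    def __init__(self, node_feat_dim: int, hidden_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Per-frame readout: project node features, then pool the
        # neighborhood subgraph into one vector per frame.
        self.frame_proj = nn.Linear(node_feat_dim, hidden_dim)
        # Temporal module over the frame sequence; a Transformer encoder is
        # a simple stand-in for a video-style spatio-temporal backbone.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, N, F) -- T temporally ordered subgraph frames, each
        # with N neighborhood nodes carrying F-dimensional features.
        per_frame = self.frame_proj(frames).mean(dim=1)  # (T, H): one vector per frame
        video = per_frame.unsqueeze(0)                   # (1, T, H): the "graph video"
        encoded = self.temporal(video)                   # attend across all frames
        # Read out the last frame's representation as the link-level embedding.
        return self.out(encoded[0, -1])                  # (H,)


# Usage: 8 frames around a target link, 16 neighbors each, 32-dim features.
frames = torch.randn(8, 16, 32)
link_embedding = GraphVideoLinkEncoder(node_feat_dim=32)(frames)
print(link_embedding.shape)  # torch.Size([64])
```

The resulting embedding is what the abstract calls a link-centric memory unit: because it is a fixed-size vector per target link, it can be concatenated with the representations produced by an existing dynamic graph encoder without changing that encoder’s architecture.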