Urban region embedding, which learns dense vector representations for urban zones, plays a foundational role in data-driven urban intelligence. These representations underpin downstream applications such as public safety management and infrastructure planning, which require a nuanced understanding of urban functionality. A core challenge remains the effective fusion of multi-view data (e.g., human mobility flows and static regional attributes) into unified zone representations. To this end, we propose \textbf{MVJC}, a \textbf{M}ulti-\textbf{v}iew \textbf{J}oint Learning and \textbf{C}ontrastive Learning framework, which employs (1) a Multi-view Joint Learning (MVJL) layer that models intra-view dependencies to extract view-specific features, and (2) a Multi-view Contrastive Learning (MVCL) layer that performs cross-region aggregation to derive consensus representations while capturing regional complementarity. We further introduce a structure-aware contrastive loss that mitigates false negatives by aligning representations through region topology rather than instance identity. Extensive experiments on New York City datasets demonstrate MVJC's superiority over the state-of-the-art method: it reduces crime-prediction MAE by 9.1\% (from a baseline of 66.9) and improves land-use clustering F-measure by 55.6\% (from a baseline of 0.45). We attribute these gains to MVJC's synergy of joint and contrastive learning, which yields representations that are simultaneously predictive and semantically discriminative.
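The structure-aware contrastive loss can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction, not the paper's implementation: it assumes an InfoNCE-style cross-view objective in which each region's positive set is expanded from its own cross-view counterpart to also include its topological neighbours, so adjacent regions are no longer pushed apart as false negatives. The function name, adjacency encoding, and temperature value are all illustrative.

```python
import numpy as np

def structure_aware_contrastive_loss(z1, z2, adjacency, temperature=0.5):
    """Illustrative sketch (not the paper's code) of an InfoNCE-style loss
    where positives are defined by region topology, not instance identity.

    z1, z2 : (N, d) embeddings of the same N regions from two views.
    adjacency : (N, N) 0/1 matrix; adjacency[i, j] = 1 if regions i and j
        are neighbours in the region topology (assumed input format).
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    sim = z1 @ z2.T / temperature  # (N, N) cross-view similarity logits
    # numerically stable log-softmax over each anchor's candidate set
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # positive mask: the region itself plus its topological neighbours
    pos_mask = adjacency.astype(bool) | np.eye(len(z1), dtype=bool)

    # average log-probability over each anchor's positive set, negated
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return loss_per_anchor.mean()
```

With an all-zero adjacency matrix this reduces to a standard cross-view InfoNCE loss; adding edges enlarges the positive sets, which is the mechanism that suppresses false negatives between adjacent regions.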
