IJCNLP-AACL 2025

December 21, 2025

Mumbai, India


keywords: unimodal model alignment, graph model, language model

Chemical molecules can be represented as graphs or as language descriptions. Training unimodal models on graphs yields different encodings than training them on language, so the existing literature force-aligns the unimodal models during training before using them in downstream applications such as drug discovery. But to what extent are graph and language unimodal model representations inherently aligned, i.e., aligned prior to any force-alignment training? Knowing this is useful for more expedient and effective force-alignment. For the first time, we explore methods to gauge the alignment of graph and language unimodal models. We find compelling differences between models in their ability to represent slight structural differences without force-alignment. We also present a unified unimodal alignment (U2A) benchmark for gauging the inherent alignment between graph and language encoders, which we make available with this paper (GitHub: https://github.com/caocongfeng/U2A.git).
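The abstract does not specify which alignment metric the benchmark uses; as a minimal sketch of one common way to gauge how aligned two frozen encoders' representations are without any force-alignment training, linear Centered Kernel Alignment (CKA) compares the embedding matrices the two encoders produce for the same set of molecules. The function name and random data below are illustrative, not from the paper.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between representation matrices X (n x d1) and
    Y (n x d2) computed for the same n molecules. Returns a value
    in [0, 1]; 1 means the representations are identical up to an
    orthogonal transform and isotropic scaling."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(X := Y, ord=None) if False else np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Stand-ins for graph- and language-encoder embeddings of 100 molecules.
rng = np.random.default_rng(0)
graph_emb = rng.standard_normal((100, 32))
# An orthogonal rotation of the same embeddings scores CKA = 1.
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
print(round(linear_cka(graph_emb, graph_emb @ Q), 4))  # → 1.0
```

A high CKA between two frozen encoders would suggest their representations are already largely aligned, so the subsequent force-alignment step has less work to do.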
