Keywords: unimodal model alignment, graph model, language model
Chemical molecules can be represented as graphs or as language descriptions. Training unimodal models on graphs yields different encodings than training them on language. The existing literature therefore force-aligns the unimodal models during training before using them in downstream applications such as drug discovery. But to what extent are \textit{graph} and \textit{language} unimodal model representations inherently aligned, i.e., aligned prior to any force-alignment training? Knowing this is useful for more expedient and effective force-alignment. For the first time, we explore methods to gauge the alignment of graph and language unimodal models. We find compelling differences between models in their ability to represent slight structural differences without force-alignment. We also present a \underline{u}nified \underline{u}nimodal \underline{a}lignment (\textbf{U2A}) benchmark for gauging the inherent alignment between graph and language encoders, which we make available with this paper\footnote{GitHub link: \href{https://github.com/caocongfeng/U2A.git}{U2A Benchmark Repository}}.
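The abstract does not specify which alignment measures the benchmark uses, but one standard way to gauge inherent alignment between two frozen encoders is linear centered kernel alignment (CKA) computed over paired embeddings of the same molecules. The sketch below is illustrative only; the function name `linear_cka` and the variables `graph_emb` and `text_emb` are hypothetical, not part of the U2A codebase.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices whose rows are
    paired, i.e., row i of X and row i of Y embed the same molecule."""
    # Center each feature dimension (column) to zero mean.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F),
    # which is 1 for identical representations up to rotation/scale.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

# Hypothetical usage: embeddings of the same 1,000 molecules from a
# frozen graph encoder and a frozen language encoder (random stand-ins
# here; real inputs would be pooled GNN outputs and LM embeddings).
rng = np.random.default_rng(0)
graph_emb = rng.normal(size=(1000, 300))
text_emb = rng.normal(size=(1000, 768))
print(f"inherent alignment (linear CKA): {linear_cka(graph_emb, text_emb):.3f}")
```

A score near 0 would indicate the two encoders carve up molecular space very differently before any force-alignment training, while a higher score suggests the modalities are already partially aligned.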