EMNLP 2025

November 07, 2025

Suzhou, China


This paper investigates compositionality in chemical large language models (LLMs), using several chemical datasets to build a benchmark that assesses these models' capabilities. We modify these datasets to generate compositional questions that reflect intricate chemical structures and reactions, thereby testing the models' understanding of chemical language. Our approach focuses on identifying and analyzing compositional patterns within chemical data, which lets us evaluate how well existing LLMs handle complex, multi-step queries. We conduct extensive experiments on several state-of-the-art chemical LLMs, revealing their strengths and weaknesses in compositional reasoning. By creating and sharing this benchmark, we aim to support the development of more capable chemical LLMs and provide a resource for future research on compositionality in chemical understanding. This work contributes to more effective AI systems for chemical analysis and synthesis, paving the way for more sophisticated applications in the field.
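The abstract does not specify how compositional questions are constructed. As a rough illustration only (not the authors' pipeline), the sketch below chains two single-hop chemistry question–answer items into one two-hop compositional question; the `QAItem` and `compose` names and the example chemistry facts are assumptions made for this sketch.

```python
# Illustrative sketch only: one way two single-hop chemistry QA items could be
# chained into a two-hop compositional question. The class/function names and
# the example facts are assumptions, not the benchmark's actual code.

from dataclasses import dataclass


@dataclass
class QAItem:
    question: str
    answer: str


def compose(first: QAItem, second_template: str, second_answer: str) -> QAItem:
    """Slot the (unstated) answer of `first` into `second_template`, so a model
    must resolve both hops to answer the resulting question."""
    bridged = second_template.format(entity=f"the answer to '{first.question}'")
    return QAItem(question=bridged, answer=second_answer)


if __name__ == "__main__":
    hop1 = QAItem(
        question="Which functional group forms when acetic acid reacts with ethanol?",
        answer="ester",
    )
    two_hop = compose(
        hop1,
        second_template="Under basic aqueous conditions, what reaction cleaves {entity}?",
        second_answer="saponification (base-catalyzed ester hydrolysis)",
    )
    print(two_hop.question)
    print(two_hop.answer)
```

An evaluation could then score a model's response to `two_hop.question` against `two_hop.answer`, crediting only answers that resolve both hops.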

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)

