AAAI 2026

January 22, 2026

Singapore, Singapore


Human cognition excels at transcending sensory input and forming latent representations that structure our understanding of the world. Although Large Language Models (LLMs) can produce chain-of-thought reasoning, they lack a principled framework for capturing latent structures and modeling uncertainty, especially in compositional reasoning tasks. In this work, we explore for the first time how to bridge LLMs with probabilistic graphical models (PGMs) to address LLM reasoning under uncertainty. To this end, we introduce Verbalized Probabilistic Graphical Modeling (vPGM), an LLM-based Bayesian framework that (i) guides LLMs to follow key principles of PGMs through natural language and (ii) refines the resulting posterior distributions via numerical Bayesian inference. Unlike many traditional probabilistic methods that require substantial domain expertise, vPGM bypasses expert-driven model design, making it well suited to scenarios with limited assumptions. We evaluated our model on several compositional reasoning tasks, both closed-ended and open-ended. Our results indicate that the model effectively enhances confidence calibration and text generation quality.
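The abstract's "numerical Bayesian inference" step can be illustrated with a minimal sketch: an LLM verbalizes a prior confidence over a latent binary variable, and conjugate Beta-Bernoulli updates refine that posterior with evidence. All names, the conjugate-model choice, and the pseudo-count strength below are illustrative assumptions, not the authors' actual vPGM implementation.

```python
# Hypothetical sketch of posterior refinement for an LLM-verbalized
# confidence. The Beta-Bernoulli model and all names are assumptions
# for illustration, not the vPGM implementation from the paper.

def refine_posterior(verbalized_prior: float, observations: list[int],
                     prior_strength: float = 4.0) -> float:
    """Refine an LLM-verbalized confidence with Bernoulli evidence.

    The verbalized prior p is encoded as Beta(a, b) with mean p and a
    fixed pseudo-count strength; each observation (1 = supporting,
    0 = contradicting) updates the Beta posterior conjugately.
    Returns the posterior mean confidence.
    """
    a = verbalized_prior * prior_strength          # prior successes
    b = (1.0 - verbalized_prior) * prior_strength  # prior failures
    a += sum(observations)                          # observed successes
    b += len(observations) - sum(observations)      # observed failures
    return a / (a + b)                              # posterior mean

# Example: the LLM verbalizes confidence 0.7; three supporting and one
# contradicting sample shift the posterior mean to 0.725.
print(round(refine_posterior(0.7, [1, 1, 1, 0]), 3))
```

With no observations the posterior mean equals the verbalized prior, so numerical evidence only moves the confidence when it actually arrives.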



Underline Science, Inc.
1216 Broadway, 2nd Floor, New York, NY 10001, USA

© 2025 Underline - All rights reserved