AAAI 2026

January 25, 2026

Singapore


Large language models (LLMs) are transforming the field of natural language processing, yet their development remains concentrated on a handful of high-resource languages, raising fundamental questions of inclusivity, trust, and global accessibility. My research addresses these challenges by advancing multilingual and trustworthy AI. On the multilingual front, I have analyzed how LLMs internally process diverse languages, introduced benchmarks such as M3Exam and SeaBench to reveal performance gaps, and led large-scale open-source initiatives including SeaLLMs and Babel that extend strong model support to underrepresented languages worldwide. Complementing inclusivity, my work also uncovers vulnerabilities in LLMs (e.g., multilingual jailbreaks) and introduces neuron-level interpretability and automated evaluation frameworks (e.g., Auto-Arena) for trustworthy deployment. Looking ahead, I aim to build AI systems that are linguistically inclusive, culturally aware, and inherently safe, bridging foundational advances with real-world applications in diverse global contexts.


