Large language models (LLMs) are transforming natural language processing, yet their development remains concentrated on a handful of high-resource languages, raising fundamental questions of inclusivity, trust, and global accessibility. My research addresses these challenges by advancing multilingual and trustworthy AI. On the multilingual front, I have analyzed how LLMs internally process diverse languages, introduced benchmarks such as M3Exam and SeaBench to reveal performance gaps, and led large-scale open-source initiatives, including SeaLLMs and Babel, that extend strong model support to underrepresented languages worldwide. Complementing this focus on inclusivity, my work also uncovers vulnerabilities in LLMs (e.g., multilingual jailbreaks) and introduces neuron-level interpretability and automated evaluation frameworks (e.g., Auto-Arena) for trustworthy deployment. Looking ahead, I aim to build AI systems that are linguistically inclusive, culturally aware, and inherently safe, bridging foundational advances with real-world applications in diverse global contexts.
