EMNLP 2025

November 05, 2025

Suzhou, China


As large language models (LLMs) are increasingly used to simulate and augment collective decision-making, it is critical to examine how they align with human social reasoning. The key novel contribution of this paper is a method and study of alignment for collective outcomes, as opposed to the existing body of work on individual behavior alignment. We adapt a classical social psychology task, Lost at Sea, to study how identity cues affect group leader election in a large-scale human experiment (N=748); we also simulate the participants with the Gemini, GPT, and Claude LLMs. This reveals a critical insight: the tension between alignment for simulation and alignment for an idealized outcome. Some models mirror people, while others mask our collective biases. Moreover, when identity cues are hidden, and contrary to our human study, some models use identity to compensate for male-associated dialogue, resulting in more gender-biased outcomes. These results highlight that understanding when LLMs mirror or mask human behavior is critical to advancing socially aligned AI.

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)
