EMNLP 2025

November 06, 2025

Suzhou, China


Large Language Models (LLMs) are increasingly applied to socially grounded tasks, yet their ability to mirror human behavior in emotionally and strategically complex contexts remains unclear. This study assesses the behavioral fidelity of personality-prompted LLMs in adversarial dispute resolution by simulating multi-turn negotiation dialogues. Each LLM is guided by a matched Five-Factor personality profile to control for individual variation and enhance realism. We evaluate alignment across three dimensions: linguistic style, emotional expression, and strategic behavior. GPT-4.1 aligns most closely with humans in language and emotion, while Claude-3.7-Sonnet best reflects strategic behavior. Despite these strengths, notable gaps remain. Our findings provide a benchmark for LLM-human alignment in socially complex interactions and highlight both the promise and limitations of personality conditioning in dialogue modeling.


