EMNLP 2025

November 06, 2025

Suzhou, China


The Spiral of Silence (SoS) theory posits that, in human societies, fear of social isolation drives individuals who hold a minority opinion to fall silent, allowing the majority opinion to dominate public discourse. When the agents are large language models (LLMs) rather than humans, this classic affective explanation no longer applies, because language models have neither emotions nor social anxiety. A fundamental question therefore arises: can purely statistical language-generation mechanisms give rise to SoS dynamics in collectives of LLM agents? We introduce an evaluation framework based on rating sequences and design four controlled experimental conditions by varying the presence of persona configurations and historical interaction signals. To measure opinion dynamics, we employ concentration metrics, including the interquartile range and kurtosis, along with trend-analysis methods such as the Mann-Kendall test and Spearman's rank correlation coefficient. We experiment on six widely used open-source models: DeepSeek-V2-Lite-Chat, Llama-3.1-8B-Instruct, Mistral-8B-Instruct-2410, and the Qwen-2.5-Instruct series (1.5B, 3B, 7B), covering cross-family comparisons at a similar scale and within-family scaling analyses for Qwen, as well as one closed-source model, GPT-4o-mini. The results indicate that (i) most models show a strong default bias in the absence of social signals; (ii) personas introduce opinion heterogeneity, while history exerts an anchoring force; and (iii) when both signals are combined, self-reinforcing dominance of the majority opinion appears far more frequently than under the other conditions, despite the agents' lack of affect. These findings challenge traditional affect-based explanations of SoS, provide empirical evidence for understanding and mitigating opinion convergence in LLM-based agent systems, and offer a conceptual link between computational sociology and the design of responsible artificial intelligence systems.
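The concentration and trend statistics named in the abstract can be sketched in plain Python. The rating sequence below is hypothetical, and the paper's exact computation (e.g. tie handling, significance testing for Mann-Kendall) may differ; this only illustrates what each metric measures on a sequence of agent ratings.

```python
import statistics

def iqr(xs):
    """Interquartile range: spread of the middle 50% of ratings."""
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return q3 - q1

def excess_kurtosis(xs):
    """Excess kurtosis: how peaked the rating distribution is
    relative to a normal distribution (0 for normal)."""
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3

def mann_kendall_s(xs):
    """Mann-Kendall S statistic: sum of signs of all later-minus-earlier
    pairs. |S| near n*(n-1)/2 indicates a strong monotonic trend."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(xs[j] - xs[i])
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

# Hypothetical 1-5 rating sequence drifting toward a majority value of 5.
ratings = [3, 2, 4, 4, 5, 4, 5, 5, 5, 5]
print(iqr(ratings))             # smaller IQR -> opinions more concentrated
print(excess_kurtosis(ratings))
print(mann_kendall_s(ratings))  # positive S -> ratings trend upward over turns
```

A shrinking IQR with rising kurtosis over the interaction window signals opinion concentration, while a large positive Mann-Kendall S (or Spearman correlation of ratings against turn index) signals a monotonic drift toward one opinion, the pattern the paper associates with SoS-like dynamics.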


