Large Language Models (LLMs) have shown promise in simulating human social behavior and interactions, yet the field lacks large-scale, systematically constructed benchmarks for evaluating their alignment with real-world social tendencies. To address this gap, we introduce \textbf{SocioBench}, a comprehensive, cross-national, and cross-topic sociological benchmark derived from the \textit{International Social Survey Programme} (ISSP). It encompasses over 480,000 real respondent records from more than 30 countries, spanning 10 sociological issues and over 40 detailed demographic attributes. Our experiments reveal notable variation in accuracy, as well as systematic biases, in the responses of current mainstream open-source LLMs when simulating social attitudes across topics and demographic groups.