Persona prompting is increasingly used in large language models (LLMs) to simulate the attitudes, values, and perspectives of various sociodemographic groups. However, different persona prompting strategies can significantly affect outcomes, raising concerns about the representativeness of such simulations. We systematically examine how different strategies for persona prompting, specifically role-adoption formats and demographic priming strategies, influence LLM behavior across diverse identity groups. We evaluate five open-source LLMs on simulating 15 intersectional demographic groups across both open- and closed-ended tasks. Our findings show that LLMs struggle to simulate marginalized groups, particularly nonbinary, Hispanic, and Middle Eastern identities, exhibiting more stereotypes and lower alignment. In contrast, prompting in an interview-style format and name-based priming consistently improve representativeness and yield more diverse outputs. Surprisingly, larger models like Llama-3.3-70B perform worse than smaller ones, with OLMo-2-7B achieving the best results. Our findings offer actionable guidance for designing sociodemographic persona prompts in LLM-based simulation studies.
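To make the two prompting dimensions concrete, here is a minimal sketch of how such prompts might be assembled: a role-adoption format (direct instruction vs. interview-style) crossed with a demographic priming strategy (explicit descriptors vs. name-based priming). The template wording, the example identity, and the function name are illustrative assumptions, not the paper's exact prompts.

```python
def persona_prompt(task: str, *, style: str, priming: str) -> str:
    """Build a persona prompt for `task`, choosing one strategy per dimension.

    Hypothetical templates; the identity strings and phrasing are
    illustrative, not taken from the study's actual materials.
    """
    if priming == "descriptor":
        identity = "a 30-year-old Hispanic woman"   # explicit demographic labels
    elif priming == "name":
        identity = "Lucia Hernandez"                # name associated with the group
    else:
        raise ValueError(f"unknown priming: {priming}")

    if style == "instruction":
        # direct role-adoption instruction
        return f"You are {identity}. {task}"
    elif style == "interview":
        # interview-style format: the model continues a dialogue as the persona
        return (
            "Interviewer: Could you tell me a bit about yourself?\n"
            f"{identity}: Sure, happy to.\n"
            f"Interviewer: {task}"
        )
    else:
        raise ValueError(f"unknown style: {style}")

print(persona_prompt("How do you feel about remote work?",
                     style="interview", priming="name"))
```

Crossing the two dimensions yields four prompt variants per identity group, which is the kind of grid a systematic comparison of prompting strategies would sweep over.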