Technical paper
RECORDING - Large language models show human-like content biases in transmission chain experiments
Keywords:
social transmission
content biases
AI
Abstract:
Research in cultural evolution demonstrates that humans are biased to attend to, remember, and transmit some types of content over others. These biases influence the evolution of diverse cultural phenomena across cultures, including folktales, religious myths, literature, film, and online misinformation. Given the widespread influence of these biases, we can anticipate that they also shaped the training data of Large Language Models (LLMs). As the use of LLMs grows, it is important to examine whether their output exhibits the same biases. In five preregistered experiments using material from previous studies with human participants, we use a transmission chain-like methodology to test for content biases in the output of the LLM ChatGPT-3. We find that ChatGPT-3 shows biases analogous to those of humans, preferring gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive content over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects by magnifying pre-existing human preferences for content that is cognitively appealing but not necessarily informative or valuable.
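For readers unfamiliar with the method, the core of a transmission chain is a simple iterated loop: a seed story is given to one participant (here, the LLM), their retelling becomes the input for the next generation, and so on. The sketch below illustrates this under stated assumptions: it uses the current OpenAI Python SDK, and the model name, prompt wording, and chain length are hypothetical placeholders, not the authors' exact experimental setup.

```python
# Minimal sketch of an LLM transmission chain, assuming an OpenAI-style
# chat API. Model name, prompt, and chain length are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transmission_chain(seed_story: str, generations: int = 5) -> list[str]:
    """Pass a story through repeated retellings: each generation's
    output becomes the next generation's input, mimicking a human
    transmission chain experiment."""
    story = seed_story
    chain = [story]
    for _ in range(generations):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # hypothetical stand-in for ChatGPT-3
            messages=[{
                "role": "user",
                "content": (
                    "Please read the following story, then rewrite it "
                    "from memory as you would retell it to someone "
                    f"else:\n\n{story}"
                ),
            }],
        )
        story = response.choices[0].message.content
        chain.append(story)
    return chain


# Content biases would then be measured by coding which elements
# (e.g. social, negative, or threat-related information) survive
# across generations relative to other content.
```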
Speaker's social media:
Twitter: @JStubbersfield; Bluesky: @jstubbersfield.bsky.social