Adolescents struggle to understand bias and representation in AI, particularly how datasets used in machine learning may or may not be representative of the populations they are drawn from. In early experiments teaching youth to investigate bias in AI systems using the Developing AI Literacy (DAILy) curriculum, we observed participants struggling to understand what it means to be represented in the output of AI tools. For example, when using Google Image Search with prompts such as “physicist” and “outdoor recreation,” participating youth did not understand the question, “Are you represented in this outcome?” We saw an opportunity to address this challenge using the Kapor Foundation's Responsible AI and Tech Justice Guide. Drawing on three of the six core components of the framework presented in the guide, we developed Games of Representation (GR), a series of three card-based activities using SET game cards to teach the concepts of population, sample, dataset, and representation. Through game play, players manipulate datasets, role-play as stakeholders with competing interests, and explore real-world scenarios where representation matters. The GR games and corresponding guide provide educators, families, and care providers with flexible, hands-on activities for guided, playful conversations with adolescents about AI ethics. This work contributes practical resources for K-12 AI education while addressing critical gaps in youth understanding of statistical bias and stakeholder influence in AI development.
