keywords:
token-level explanations
stereotype detection
large language models
A stereotype is a generalised claim about a social group. Such claims shift with culture and context and are usually phrased in everyday language, which makes them hard to detect: state-of-the-art large language models (LLMs) reach only 68\% macro-F1 on the yes/no task “does this sentence contain a stereotype?”. We present HEARTS, a Holistic framework for Explainable, sustAinable and Robust Text Stereotype detection that brings together NLP and social science. The framework is built on the Expanded Multi-Grain Stereotype Dataset (EMGSD): 57\,201 English sentences covering gender, profession, nationality, race, religion and LGBTQ+ topics, adding 10\% more data for under-represented groups while maintaining high annotator agreement ($\kappa = 0.82$). Fine-tuning the lightweight ALBERT-v2 model on EMGSD raises binary detection to 81.5\% macro-F1, matching full BERT while emitting 200$\times$ less CO$_2$. For explainability, we blend SHAP and LIME token-level attribution scores and introduce a confidence measure that correlates positively with prediction correctness ($\rho = 0.18$). We then use HEARTS to audit 16 state-of-the-art LLMs for stereotype propagation, probing each with 1\,050 neutral prompts: stereotype rates fall by 23\% from one model generation to the next, yet clear differences remain across model families (LLaMA $>$ Gemini $>$ GPT $>$ Claude). HEARTS thus supplies a practical, low-carbon and interpretable toolkit for measuring stereotype bias in language.
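The detection component is a standard sequence-classification fine-tune. Below is a minimal sketch using Hugging Face `transformers`, assuming EMGSD is available locally as a CSV with `text` and `label` columns; the file path and hyperparameters are illustrative, not the authors' exact configuration.

```python
# Minimal fine-tuning sketch: ALBERT-v2 as a binary stereotype classifier.
# Assumes EMGSD is stored as emgsd.csv with "text" and "label" (0/1) columns;
# the path and hyperparameters are illustrative, not the paper's exact setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2)  # 0 = no stereotype, 1 = stereotype

dataset = load_dataset("csv", data_files="emgsd.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1, seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="albert-emgsd",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  tokenizer=tokenizer)  # enables padded batching by default
trainer.train()
```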
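The explainability layer averages token-level attributions from two explainers. The sketch below assumes SHAP and LIME scores have already been computed and aligned to the same token sequence; the min-max normalisation and the concentration-based confidence heuristic are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    """Rescale attributions to [0, 1] so SHAP and LIME are comparable."""
    lo, hi = scores.min(), scores.max()
    return np.zeros_like(scores) if hi == lo else (scores - lo) / (hi - lo)

def blend_attributions(shap_scores, lime_scores) -> np.ndarray:
    """Average normalised SHAP and LIME token attributions (assumes both
    explainers were run over the same tokenisation of the sentence)."""
    s = minmax(np.asarray(shap_scores, dtype=float))
    l = minmax(np.asarray(lime_scores, dtype=float))
    return (s + l) / 2.0

def confidence(blended: np.ndarray) -> float:
    """Illustrative confidence heuristic: how sharply the explanation
    concentrates on a few tokens (max minus mean attribution)."""
    return float(blended.max() - blended.mean())

# Toy example with made-up attribution values.
tokens = ["nurses", "are", "always", "women"]
shap_scores = [0.42, 0.01, 0.35, 0.55]
lime_scores = [0.38, 0.05, 0.30, 0.61]
blended = blend_attributions(shap_scores, lime_scores)
for tok, score in zip(tokens, blended):
    print(f"{tok:>8s}  {score:.2f}")
print("confidence:", round(confidence(blended), 2))
```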
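The LLM audit step amounts to classifying each model's completions of neutral prompts and reporting the flagged fraction. A hedged sketch, assuming the fine-tuned classifier from above is exposed as a `transformers` pipeline and `generate_fn` is a placeholder for whichever provider API produces a completion:

```python
from typing import Callable, Iterable
from transformers import pipeline

# Classifier fine-tuned on EMGSD (output directory from the sketch above).
detector = pipeline("text-classification", model="albert-emgsd")

def stereotype_rate(generate_fn: Callable[[str], str],
                    prompts: Iterable[str]) -> float:
    """Fraction of completions flagged as stereotypical.
    generate_fn is a placeholder for any LLM provider call."""
    completions = [generate_fn(p) for p in prompts]
    preds = detector(completions, truncation=True)
    flagged = sum(p["label"] == "LABEL_1" for p in preds)  # LABEL_1 = stereotype
    return flagged / len(completions)

# Usage (hypothetical): rates = {name: stereotype_rate(fn, neutral_prompts)
#                                for name, fn in llm_apis.items()}
```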