Assessing the originality of creative ideas often relies on their statistical infrequency within a population---an approach long used in creativity research but difficult to automate at scale. Human annotation via manual bucketing of idea rephrasings is labor-intensive, subjective, and brittle under large corpora. We introduce a fully automated, psychometrically validated pipeline for frequency-based originality scoring. Our method, MuseRAG, combines large language models (LLMs) with an externally orchestrated retrieval-augmented generation (RAG) framework. Given a new idea, the system retrieves semantically similar prior idea buckets and zero-shot prompts the LLM to judge whether the new idea belongs to an existing bucket or forms a new one. The resulting buckets enable computation of frequency-based originality metrics. MuseRAG matches human annotators in both idea clustering (AMI = 0.59) and participant-level originality scores (r = 0.89), while exhibiting strong convergent and external validity. Our work enables intent-sensitive, human-aligned originality scoring, aiding creativity research at scale.
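The retrieve-then-judge loop and the frequency-based scoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bag-of-words embedding and the similarity-threshold `judge_same_bucket` function are toy stand-ins for the semantic retrieval and zero-shot LLM bucket-membership judgment that MuseRAG actually uses, and the `top_k` and `threshold` parameters are invented for the example.

```python
from collections import defaultdict

def embed(text):
    # Toy stand-in for a semantic embedding: bag-of-words counts.
    vec = defaultdict(int)
    for tok in text.lower().split():
        vec[tok] += 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def judge_same_bucket(idea, bucket, threshold=0.5):
    # Stand-in for the zero-shot LLM judgment of whether `idea`
    # belongs to an existing bucket; here, a similarity threshold.
    return any(cosine(embed(idea), embed(x)) >= threshold for x in bucket)

def bucket_ideas(ideas, top_k=3):
    buckets = []  # each bucket is a list of idea strings
    for idea in ideas:
        # Retrieve the top-k most similar existing buckets,
        # then ask the judge whether the idea joins one of them.
        ranked = sorted(
            buckets,
            key=lambda b: max(cosine(embed(idea), embed(x)) for x in b),
            reverse=True,
        )[:top_k]
        for b in ranked:
            if judge_same_bucket(idea, b):
                b.append(idea)
                break
        else:
            buckets.append([idea])  # no match: start a new bucket
    return buckets

def originality_scores(ideas):
    # Frequency-based originality: ideas in rarer buckets score higher.
    buckets = bucket_ideas(ideas)
    n = len(ideas)
    return {idea: 1.0 - len(b) / n for b in buckets for idea in b}
```

For example, two rephrasings of "use a brick as a paperweight" land in one bucket and score lower than a one-off idea like "grind brick into red pigment". A real deployment would swap in sentence embeddings for retrieval and an LLM prompt for the membership judgment, keeping the same outer loop.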