Large Language Models (LLMs) have greatly advanced knowledge graph question answering (KGQA), yet existing systems are typically optimized for returning highly relevant but predictable answers.
A missing yet desired capability is to exploit LLMs to suggest surprising and novel ("serendipitous") answers.
In this paper, we formally define the serendipity-aware KGQA task and propose the SerenQA framework to evaluate LLMs' ability to uncover unexpected insights in scientific KGQA tasks.
SerenQA includes a rigorous serendipity metric based on relevance, novelty, and surprise, along with an expert-annotated benchmark derived from the Clinical Knowledge Graph, focused on drug repurposing.
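The paper defines its metric precisely; as a rough illustration only, one can imagine aggregating the three components so that weakness on any single axis penalizes the overall score. The sketch below assumes per-answer relevance, novelty, and surprise scores in [0, 1] and combines them with a harmonic mean; this aggregation is a hypothetical stand-in, not SerenQA's actual formula.

```python
def serendipity_score(relevance: float, novelty: float, surprise: float) -> float:
    """Illustrative aggregation (harmonic mean) of three component scores in [0, 1].

    A harmonic mean rewards answers that score well on all three axes:
    a highly relevant but entirely unsurprising answer scores low overall.
    """
    components = (relevance, novelty, surprise)
    if any(c <= 0.0 for c in components):
        # Any zero component makes the answer non-serendipitous under this sketch.
        return 0.0
    return len(components) / sum(1.0 / c for c in components)
```

For example, an answer with scores (0.9, 0.1, 0.8) would rank below a balanced (0.6, 0.6, 0.6) answer, reflecting the intuition that serendipity requires relevance, novelty, and surprise simultaneously.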
Additionally, it features a structured evaluation pipeline encompassing three subtasks: knowledge retrieval, subgraph reasoning, and serendipity exploration.
Our experiments reveal that while state-of-the-art LLMs perform well on retrieval, they still struggle to identify genuinely surprising and valuable discoveries, underscoring substantial room for improvement.
Our curated resources have been released as supplementary material.
