Large Language Models (LLMs) have demonstrated significant potential across various domains, but they often struggle to integrate external knowledge and perform complex reasoning, leading to hallucinations and unreliable outputs. Retrieval-Augmented Generation (RAG) has emerged as a promising paradigm for mitigating these issues by incorporating external knowledge. Conventional RAG approaches, however, especially those based on vector similarity, fail to handle relational structures and multi-step reasoning effectively. In this work, we propose CogGRAG, a human-cognition-inspired, graph-based RAG framework for Knowledge Graph Question Answering (KGQA). CogGRAG mimics human reasoning through a three-stage process: (1) top-down problem decomposition via mind-map construction; (2) structured retrieval of local and global knowledge from external Knowledge Graphs (KGs); and (3) bottom-up reasoning with self-verification. Unlike previous tree-based decomposition methods such as MindMap or Graph-CoT, CogGRAG unifies the entire reasoning process under a global mind map with early-stage, graph-structured retrieval and integrates dual-process verification to mitigate error propagation. Extensive experiments demonstrate that CogGRAG outperforms existing methods in both accuracy and reliability. Our code and data are available at: https://anonymous.4open.science/r/RAG-5883.
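To make the three-stage pipeline concrete, here is a minimal, illustrative sketch over a toy knowledge graph of (subject, relation, object) triples. All names here (`decompose`, `retrieve`, `reason`) and the query format are hypothetical stand-ins for exposition, not the paper's actual API; see the linked repository for the real implementation.

```python
# Toy KG: a list of (subject, relation, object) triples.
KG = [
    ("France", "capital", "Paris"),
    ("Paris", "river", "Seine"),
]

def decompose(relations):
    """Stage 1 (top-down): break a multi-hop question into ordered one-hop
    sub-questions, playing the role of the mind map's leaves."""
    return list(relations)

def retrieve(subject, relation, kg):
    """Stage 2: structured retrieval of supporting triples from the KG."""
    return [t for t in kg if t[0] == subject and t[1] == relation]

def reason(subject, relations, kg):
    """Stage 3 (bottom-up): chain sub-answers hop by hop; self-verification
    here is the check that each hop is backed by at least one retrieved
    triple, abstaining (None) instead of guessing when evidence is missing."""
    current = subject
    for rel in decompose(relations):
        evidence = retrieve(current, rel, kg)
        if not evidence:          # no supporting fact in the KG
            return None
        current = evidence[0][2]  # follow the edge to the object
    return current

print(reason("France", ["capital", "river"], KG))  # -> Seine
print(reason("France", ["president"], KG))         # -> None (abstains)
```

In the full framework the decomposition is produced by the LLM as a mind map, retrieval covers both local (entity-level) and global (subgraph-level) knowledge, and verification is a dual-process LLM check rather than a simple evidence-presence test; the sketch only mirrors the control flow.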