Most commonsense reasoning models overlook the influence of personality traits, limiting their effectiveness in personalized systems such as dialogue generation. To address this limitation, we introduce the Personality-aware Commonsense Knowledge Graph (PCoKG), a structured dataset comprising 521,316 quadruples. We began by employing three evaluators to score and filter events from the ATOMIC dataset, selecting those likely to elicit diverse reasoning patterns across different personality types. For knowledge graph construction, we leveraged the role-playing capabilities of large language models (LLMs) to perform reasoning tasks. To enhance the quality of the generated knowledge, we incorporated a debate mechanism consisting of a supporter, an opposer, and a judge, which iteratively refined the outputs through feedback loops. We evaluated the dataset from multiple perspectives and conducted fine-tuning and ablation experiments with several LLM backbones to assess PCoKG's robustness and the effectiveness of its construction pipeline. Our LoRA-based fine-tuning results indicated a positive correlation between model performance and the parameter scale of the base models. Finally, we applied PCoKG to persona-based dialogue generation, where it demonstrated improved consistency between generated responses and reference outputs. This work bridges the gap between commonsense reasoning and individual cognitive differences, enabling more personalized and context-aware AI systems.
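The supporter/opposer/judge debate mechanism described above can be pictured as a simple refinement loop. The sketch below is a hypothetical illustration, not the authors' implementation: the three role functions are placeholders standing in for actual LLM calls, and all names (`supporter`, `opposer`, `judge`, `debate`, `max_rounds`) are assumptions introduced here for exposition.

```python
# Hypothetical sketch of the debate-based refinement loop: a supporter
# proposes an inference, an opposer critiques it, and a judge decides
# whether to accept or send the critique back as feedback.
# In the actual pipeline each role would be an LLM call; here they are
# trivial stand-ins so the control flow can be shown end to end.

def supporter(event, feedback):
    # Propose a commonsense inference; revise it when feedback is given.
    suffix = f" [revised per: {feedback}]" if feedback else ""
    return f"inference for {event!r}{suffix}"

def opposer(candidate):
    # Critique the candidate; return None when no objection remains.
    return None if "[revised" in candidate else "lacks personality grounding"

def judge(candidate, critique):
    # Accept the candidate only when the opposer raises no objection.
    return critique is None

def debate(event, max_rounds=3):
    feedback = None
    candidate = supporter(event, feedback)
    for _ in range(max_rounds):
        critique = opposer(candidate)
        if judge(candidate, critique):
            return candidate
        feedback = critique          # feed the critique back
        candidate = supporter(event, feedback)
    return candidate                 # best effort after the final round
```

Under these stand-in roles, the first candidate is rejected, revised once using the opposer's critique, and then accepted by the judge, mirroring the iterative feedback loop the abstract describes.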