Knowledge editing aims to update specific knowledge in Large Language Models (LLMs) without retraining the entire model. However, existing methods generally struggle to manage the ripple effects of knowledge updates, particularly in multi-hop reasoning tasks, where conflicts between old and new information often shift reasoning chains and degrade consistency. To address this issue, we propose EchoEdit, a ripple-aware knowledge editing framework. EchoEdit introduces the RippleGraph to explicitly model the knowledge regions potentially affected by an edit, and employs a RippleRule generator to dynamically produce diffusion rules that precisely constrain knowledge propagation. Furthermore, we distill a Chain-of-Thought (CoT) planner from an external teacher model, which decomposes complex reasoning-chain planning into RippleGraph-guided reasoning, thereby alleviating the reasoning burden on low-resource LLMs in multi-hop tasks. Experimental results on the MQuAKE and RIPPLEEDITS multi-hop reasoning benchmarks demonstrate that EchoEdit significantly outperforms existing mainstream methods, effectively enhancing post-edit reasoning consistency and generalization.
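To make the ripple-effect idea concrete, the sketch below illustrates (in hypothetical code, not the paper's implementation) how a graph over knowledge triples can delimit the region of facts an edit may affect. The triples, function names, and the hop-limit "rule" are all illustrative assumptions; in the paper, the RippleGraph and RippleRule generator presumably play analogous but far more sophisticated roles.

```python
from collections import deque

# Hypothetical mini knowledge base of (subject, relation, object) triples.
# These facts are illustrative only; they do not come from the paper.
triples = [
    ("UK", "head_of_state", "Elizabeth_II"),
    ("Elizabeth_II", "spouse", "Philip"),
    ("Philip", "birthplace", "Greece"),
    ("UK", "capital", "London"),
]

def build_graph(triples):
    """Index triples by subject so we can follow outgoing edges."""
    graph = {}
    for s, r, o in triples:
        graph.setdefault(s, []).append((r, o))
    return graph

def ripple_region(graph, edited_entity, max_hops=2):
    """Collect triples reachable from the edited entity within max_hops.

    The max_hops bound stands in for a (trivial) diffusion rule: it
    limits how far an edit is allowed to propagate through the graph.
    """
    region, seen = [], {edited_entity}
    queue = deque([(edited_entity, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # diffusion rule: stop expanding past the hop limit
        for rel, obj in graph.get(node, []):
            region.append((node, rel, obj))
            if obj not in seen:
                seen.add(obj)
                queue.append((obj, depth + 1))
    return region

graph = build_graph(triples)
# Edit: UK's head_of_state changes; which downstream facts might be affected?
affected = ripple_region(graph, "UK", max_hops=2)
```

With a hop limit of 2, the region includes the edited fact's one- and two-hop neighbours (e.g. the spouse of the old head of state) but excludes three-hop facts such as Philip's birthplace, mirroring how a diffusion rule bounds which facts a multi-hop editor must re-check.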