Large language models often generate factually incorrect or outdated knowledge, prompting the development of model editing methods for precise knowledge updates. However, current mainstream locate-then-edit approaches exhibit progressive performance degradation during sequential editing, owing to inadequate mechanisms for long-term knowledge preservation. To address this, we formulate sequential editing as a constrained stochastic programming problem. Given the challenges posed by the cumulative preservation-error constraint and the fact that editing tasks are revealed only gradually, we propose \textbf{LyapLock}. It integrates queuing theory and Lyapunov optimization to decompose the long-term constrained program into tractable stepwise subproblems that can be solved efficiently. This is the first model editing framework with rigorous theoretical guarantees that maintains long-term knowledge preservation constraints while achieving asymptotically optimal editing performance. Experimental results show that our framework scales sequential editing capacity to 10,000 edits while stabilizing general capabilities and boosting average editing efficacy by 11.89% over SOTA baselines. Our code is released at https://anonymous.4open.science/r/LyapLock-7AC7.
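The stepwise decomposition follows the standard Lyapunov drift-plus-penalty pattern. The sketch below uses generic illustrative notation of our own choosing ($f_t$ for the per-edit loss at step $t$, $g_t$ for the per-step preservation error, $\epsilon$ for its long-term budget, $\theta$ for the edited parameters, and $V$ for the penalty weight), not necessarily the paper's symbols:

\[
Q(t+1) = \max\bigl\{\, Q(t) + g_t(\theta_t) - \epsilon,\; 0 \,\bigr\},
\qquad
\theta_t \in \arg\min_{\theta}\; V\, f_t(\theta) + Q(t)\, g_t(\theta),
\]

where the virtual queue $Q(t)$ tracks accumulated constraint violations. Minimizing the drift-plus-penalty objective at each step trades off editing loss against preservation error online, so the long-horizon constrained program never has to be solved directly; this is the generic mechanism, a sketch rather than LyapLock's exact formulation.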