Low-Rank Adaptation (LoRA) has emerged as a powerful parameter-efficient fine-tuning (PEFT) method for adapting large language models to downstream tasks. While recent work integrates mixture-of-experts (MoE) mechanisms with multiple LoRA modules to handle multi-task or complex scenarios, existing approaches face two key limitations: restricted cross-expert knowledge sharing and the resulting expert homogenization. To address these challenges, we propose a novel diversity-regulated asymmetric LoRA decomposition framework for efficient complex-task adaptation, which enables flexible knowledge sharing through asymmetric expert decomposition and guarantees expert diversity with a dual orthogonality regularization. Extensive experiments on eight public benchmarks, spanning both multi-task and single-task settings, demonstrate the superiority of our approach over existing methods.
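The abstract does not spell out the dual orthogonality regularization, but a common way to realize such a diversity term is to penalize pairwise inner products between the experts' low-rank factors, on both the down-projection (A) and up-projection (B) sides. The sketch below is a minimal NumPy illustration under that assumption; the function name, matrix shapes, and loss form are hypothetical and not taken from the paper.

```python
import numpy as np

def orthogonality_penalty(mats):
    """Sum of squared off-diagonal Gram-matrix entries of the flattened
    expert matrices. Driving this toward zero pushes experts to be
    mutually orthogonal (hypothetical form; the paper's loss may differ)."""
    flat = np.stack([m.ravel() for m in mats])  # (num_experts, d)
    gram = flat @ flat.T                        # pairwise inner products
    off_diag = gram - np.diag(np.diag(gram))    # zero out self-similarity
    return float(np.sum(off_diag ** 2))

rng = np.random.default_rng(0)
# Three hypothetical LoRA experts, each with factors A (r x d_in) and B (d_out x r)
A = [rng.normal(size=(4, 16)) for _ in range(3)]
B = [rng.normal(size=(16, 4)) for _ in range(3)]

# "Dual" regularization under this assumption: penalize both factor sides
diversity_loss = orthogonality_penalty(A) + orthogonality_penalty(B)
```

In training, a term like `diversity_loss` would be added to the task loss with a small weight, discouraging the experts from collapsing into near-identical subspaces.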