AAAI 2026

January 24, 2026

Singapore


Low-Rank Adaptation (LoRA) has emerged as a powerful parameter-efficient fine-tuning (PEFT) method for adapting large language models to downstream tasks. While recent work integrates mixture-of-experts (MoE) mechanisms with multiple LoRA modules to handle multi-task or complex scenarios, existing approaches face two key limitations: restricted cross-expert knowledge sharing and the resulting expert homogenization. To address these challenges, we propose a novel diversity-regulated asymmetric LoRA decomposition framework for efficient complex-task adaptation, which enables flexible knowledge sharing through asymmetric expert decomposition and guarantees expert diversity via a dual orthogonality regularization. Extensive experiments on eight public benchmarks, spanning both multi-task and single-task settings, demonstrate the superiority of our approach over existing methods.
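The abstract does not specify the exact decomposition, so the following is only a minimal sketch of the general idea. It assumes one plausible reading of "asymmetric expert decomposition": a single down-projection `A` shared by all experts (enabling cross-expert knowledge sharing) with an expert-specific up-projection `B_i` per expert, plus a hypothetical "dual orthogonality" penalty with an inter-expert term (pushing experts' subspaces apart to avoid homogenization) and an intra-expert term (pushing each expert's columns toward orthonormality). All function names, the gating scheme, and the penalty form are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 16, 4, 3  # model dim, LoRA rank, number of experts

# Assumed asymmetric decomposition: one shared down-projection A,
# plus an expert-specific up-projection B_i per expert (hypothetical).
A = rng.normal(scale=0.01, size=(r, d))            # shared across experts
B = [np.zeros((d, r)) for _ in range(n_experts)]   # per-expert, zero-init as in LoRA

def lora_moe_forward(x, gate_logits):
    """Route input x through softmax-gated LoRA experts that share A."""
    gates = np.exp(gate_logits - gate_logits.max())
    gates /= gates.sum()
    delta = sum(g * (B_i @ (A @ x)) for g, B_i in zip(gates, B))
    return x + delta  # residual LoRA update; frozen base weight W omitted

def dual_orthogonality_penalty(B, lam_inter=1.0, lam_intra=1.0):
    """Hypothetical dual regularizer: an inter-expert term penalizing overlap
    between distinct experts' up-projections, and an intra-expert term
    pulling each expert's columns toward orthonormality."""
    inter = sum(np.linalg.norm(B[i].T @ B[j], "fro") ** 2
                for i in range(len(B)) for j in range(i + 1, len(B)))
    intra = sum(np.linalg.norm(Bi.T @ Bi - np.eye(Bi.shape[1]), "fro") ** 2
                for Bi in B)
    return lam_inter * inter + lam_intra * intra
```

With zero-initialized `B_i`, the forward pass reduces to the identity (the standard LoRA warm start), while the intra-expert term is nonzero, so gradient pressure toward orthonormal columns begins immediately; the inter-expert term only activates once experts diverge from zero.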

