The transfer of knowledge from large-scale pre-trained models to diverse downstream tasks has achieved remarkable success. Beyond the traditional full fine-tuning paradigm, Parameter-Efficient Fine-Tuning (PEFT) has emerged as a more efficient model adaptation approach. However, applying existing PEFT methods to adapt dense vision models, particularly in multi-task settings, remains inadequately explored: they suffer from low efficiency and limited task scalability, and they neglect cross-task fine-tuning interactions. To address these challenges, we propose Task Dynamic-Synergistic Skill Adaptation (TDSS), an efficient and scalable multi-task model adaptation framework for dense visual prediction. TDSS comprises two key components: Task-Dynamic Skill Adapters (TDSA) and Task-Synergistic Adaptation Interaction (TSAI). Specifically, TDSA modules are inserted in parallel into the pre-trained vision model and extract task-specific adapted features by combining skill-representation experts with task-dynamic gating. TSAI enhances cross-task adaptation interaction by bridging globally generic and task-specific adapted features. Extensive experiments on multi-task dense visual prediction demonstrate that TDSS surpasses existing state-of-the-art parameter-efficient fine-tuning methods while remaining highly efficient and scalable in both parameter count and computational complexity.
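The abstract does not give implementation details, but the adapter-plus-gating idea can be illustrated with a minimal sketch. The PyTorch code below assumes a common mixture-of-experts adapter form: several small bottleneck "skill" experts shared across tasks, softly mixed by a learnable per-task gate and added as a parallel residual to frozen backbone features. Every name here (`TaskDynamicSkillAdapter`, `gate_logits`, the bottleneck expert shape) is a hypothetical illustration, not the authors' released code.

```python
# Hypothetical sketch of a task-dynamic skill adapter; the expert form,
# gating scheme, and all names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskDynamicSkillAdapter(nn.Module):
    """Parallel adapter: K shared skill experts mixed by a per-task gate."""

    def __init__(self, dim: int, bottleneck: int, num_experts: int, num_tasks: int):
        super().__init__()
        # Skill-representation experts: small bottleneck projections (assumed form).
        self.down = nn.ModuleList(nn.Linear(dim, bottleneck) for _ in range(num_experts))
        self.up = nn.ModuleList(nn.Linear(bottleneck, dim) for _ in range(num_experts))
        # Task-dynamic gating: learnable logits per (task, expert) pair.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, num_experts))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # x: (batch, tokens, dim) features from a frozen backbone block.
        weights = F.softmax(self.gate_logits[task_id], dim=-1)  # (num_experts,)
        out = torch.zeros_like(x)
        for w, down, up in zip(weights, self.down, self.up):
            out = out + w * up(F.gelu(down(x)))  # weighted skill-expert residual
        return x + out  # parallel residual added to the frozen features


# Usage: one shared adapter serving two dense tasks via different gates.
adapter = TaskDynamicSkillAdapter(dim=768, bottleneck=64, num_experts=4, num_tasks=2)
feats = torch.randn(2, 196, 768)        # e.g., ViT patch tokens
seg_feats = adapter(feats, task_id=0)   # semantic segmentation branch
depth_feats = adapter(feats, task_id=1) # depth estimation branch
```

In this reading, only the adapters and gates are trained while the backbone stays frozen, which is what keeps the per-task parameter and compute overhead small as tasks are added.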
