AAAI 2026

January 22, 2026

Singapore, Singapore


Pretrained models are rapidly scaling in size, which substantially increases the cost of fine-tuning them for downstream tasks. To address this challenge, parameter-efficient fine-tuning (PEFT) methods optimize only a minimal set of parameters for adaptation. However, current PEFT approaches predominantly employ an "additive" strategy, introducing learnable modules into inputs or architectures, and neglect the knowledge already embedded in pretrained models, which may be redundant or even conflict with downstream tasks. This limitation leads to increased inference latency and suboptimal transfer performance, particularly when the domain gap is large. In this paper, we propose a Subtractive Fine-tuning Paradigm (SFP), which converts multiple redundant operations within the original module into a single linear transformation to improve both inference speed and model performance. Specifically, we introduce a compact filter block that replaces modules in the original structure carrying interfering or redundant knowledge, thereby reducing model conflicts. The filter block is constructed with a pseudo-inverse matrix so that it inherits the knowledge of the module it replaces; the rest of the model is then frozen, and only the filter block is fine-tuned to eliminate interfering and redundant knowledge, enhancing the model's adaptability to downstream tasks. Experimental results demonstrate that SFP outperforms existing PEFT methods in accuracy while reducing overall model parameters by 12%. Compared to full fine-tuning, accuracy improves by 8.47 percentage points (74.04% vs. 65.57% on VTAB).
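The pseudo-inverse initialization described above can be illustrated with a minimal sketch. This is not the authors' implementation: it only shows the generic least-squares step of fitting a linear map so that a new block reproduces a replaced module's input-output behavior before any fine-tuning. The matrices `X` (inputs to the replaced module) and `Y` (its outputs), and the function name `fit_filter_block`, are illustrative assumptions.

```python
import numpy as np

def fit_filter_block(X, Y):
    """Return the least-squares linear map W with X @ W ≈ Y,
    computed via the Moore-Penrose pseudo-inverse of X."""
    return np.linalg.pinv(X) @ Y

# Toy setup: pretend the replaced module is itself a linear map,
# so the filter block can inherit its behavior exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))        # activations entering the module
W_module = rng.normal(size=(64, 64))  # stand-in for the replaced module
Y = X @ W_module                      # the module's recorded outputs

W = fit_filter_block(X, Y)            # filter block initialization
err = np.max(np.abs(X @ W - Y))      # reconstruction error on the data
```

In practice the replaced module would be nonlinear, so `W` only approximates it on the collected activations; the point of the initialization is that the filter block starts from the module's knowledge rather than from random weights, and is then the only part of the model that is fine-tuned.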
