Large Language Models (LLMs) have shown impressive capabilities across various text generation tasks; however, their potential for simple yet essential text classification remains underexplored, as LLM pre-training tends to emphasize generation over classification. While instruction-tuned LLMs can recast classification as a generation task, they struggle to categorize nuanced texts. One such example is text revision, which involves subtle changes between pairs of texts. Simply fine-tuning LLMs for revision classification seems plausible, but it requires a large amount of revision annotations, which are expensive and scarce. To address this issue, we introduce a plug-and-play parameter-efficient fine-tuning (PEFT) framework, named IR-Tuning, which fine-tunes only a subset of important LLM layers while freezing the redundant ones. IR-Tuning improves fine-tuning convergence, reduces memory consumption, and is effective on small corpora. Experiments suggest that our proposed method surpasses multiple PEFT baselines across diverse revision types.
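For intuition, below is a minimal PyTorch/Transformers sketch of the core idea behind layer-selective PEFT: score each transformer layer's importance, then fine-tune only the top-k layers while freezing the rest. The gradient-magnitude scoring and the helper `select_and_freeze_layers` are illustrative assumptions for this sketch, not IR-Tuning's published procedure.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative sketch only: the gradient-magnitude importance score below is
# an assumed stand-in, not necessarily IR-Tuning's exact ranking criterion.

def select_and_freeze_layers(model, layers, probe_batch, k=4):
    """Keep only the k highest-scoring layers trainable; freeze the rest."""
    # Probe pass: one backward pass to estimate each layer's importance.
    model.zero_grad()
    model(**probe_batch).loss.backward()
    scores = [
        sum(p.grad.abs().mean().item() for p in layer.parameters()
            if p.grad is not None)
        for layer in layers
    ]
    top_k = sorted(range(len(layers)), key=scores.__getitem__, reverse=True)[:k]
    # Freeze everything, then re-enable only the selected layers
    # (plus the classification head, which must stay trainable).
    for p in model.parameters():
        p.requires_grad = False
    for i in top_k:
        for p in layers[i].parameters():
            p.requires_grad = True
    for p in model.classifier.parameters():
        p.requires_grad = True
    model.zero_grad()
    return top_k

# Usage (BERT-style encoder shown; decoder-only LLMs expose their layer list
# differently, e.g. model.model.layers for Llama-family checkpoints):
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["original sentence", "revised sentence"],
            return_tensors="pt", padding=True)
batch["labels"] = torch.tensor([0, 1])
trainable = select_and_freeze_layers(model, model.bert.encoder.layer, batch, k=4)
```

The memory savings of this style of selective tuning come from the frozen parameters: they need no gradient storage and no optimizer state, so only the selected layers and the classification head contribute to the training footprint.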