EMNLP 2025

November 05, 2025

Suzhou, China


Large Language Models (LLMs) have shown impressive capabilities across various text generation tasks; however, their potential for simple yet essential text classification remains underexplored, as LLM pre-training tends to emphasize generation over classification. While instruction-tuned LLMs can recast classification as a generation task, they struggle to categorize nuanced texts. One such example is text revision, which involves nuanced changes between pairs of texts. While simply fine-tuning LLMs for revision classification seems plausible, it requires a large number of revision annotations, which are expensive and scarce. To address this issue, we introduce a plug-and-play parameter-efficient fine-tuning (PEFT) framework, named IR-Tuning, which fine-tunes only a subset of important LLM layers while freezing the redundant ones. IR-Tuning improves fine-tuning convergence, reduces memory consumption, and is effective for small corpora. Experiments suggest that our proposed method surpasses multiple PEFT baselines across diverse revision types.
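To make the layer-selective idea concrete, the sketch below shows one way to fine-tune only a few "important" transformer layers of a pre-trained LLM while freezing the rest. This is a minimal illustration, not the paper's implementation: the importance criterion (a simple weight-magnitude proxy), the number of selected layers `k`, and the GPT-2 backbone are all assumptions made for the example.

```python
# Minimal sketch of layer-selective fine-tuning in the spirit of IR-Tuning.
# Hypothetical: the actual importance measure and layer budget in the paper may differ.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # any decoder-only LLM

def layer_importance(layer):
    # Hypothetical proxy score: mean absolute weight magnitude of the layer.
    with torch.no_grad():
        return torch.mean(torch.stack([p.abs().mean() for p in layer.parameters()]))

k = 4                      # number of layers to fine-tune (assumed budget)
layers = model.transformer.h  # GPT-2's stack of transformer blocks
scores = [(i, layer_importance(layer)) for i, layer in enumerate(layers)]
selected = {i for i, _ in sorted(scores, key=lambda t: t[1], reverse=True)[:k]}

# Freeze every parameter, then unfreeze only the selected layers.
for p in model.parameters():
    p.requires_grad = False
for i in selected:
    for p in layers[i].parameters():
        p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,} (layers {sorted(selected)})")
```

Only the unfrozen layers receive gradient updates during training, which is what reduces memory consumption and speeds convergence relative to full fine-tuning.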

Downloads

Slides · Paper · Transcript, English (automatic)

