EMNLP 2025

November 06, 2025

Suzhou, China


Multilabel text classification (MLTC) is an essential task in NLP applications. Traditional methods require extensive labeled data and are limited to fixed label sets. Extracting labels with LLMs is more flexible and broadly applicable, but incurs high computational costs. In this work, we introduce a distillation-based T5 generalist model for zero-shot MLTC and few-shot fine-tuning. Our model accommodates variable label sets through general, domain-agnostic pretraining, while modeling dependencies among labels. Experiments show that our approach outperforms baselines of similar size on three few-shot tasks. Our code is available at https://anonymous.4open.science/r/t5-multilabel-0C32/README.md
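As a rough illustration of the setup the abstract describes, the sketch below shows zero-shot multilabel classification with a T5-style encoder-decoder via Hugging Face Transformers: the candidate label set is supplied in the prompt (so it can vary at inference time), and labels are generated sequentially, which is one way to model dependencies among them. The prompt format, the `predict_labels` helper, and the example labels are hypothetical; this is not the authors' distilled model.

```python
# A minimal sketch, assuming an off-the-shelf t5-base checkpoint;
# the prompt template and label parsing below are illustrative only.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def predict_labels(text: str, candidate_labels: list[str]) -> list[str]:
    # Condition generation on the document plus the candidate label set,
    # so the same model can serve variable label sets at inference time.
    prompt = "classify: " + text + " options: " + ", ".join(candidate_labels)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=32)
    decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    # The decoder emits labels one after another; splitting on commas
    # recovers the predicted label set.
    return [lab.strip() for lab in decoded.split(",") if lab.strip()]

print(predict_labels(
    "The central bank raised interest rates amid rising inflation.",
    ["economics", "politics", "sports", "technology"],
))
```

A distilled generalist model, as in the paper, would be fine-tuned so that this kind of label-set-conditioned generation matches an LLM teacher's outputs, at a fraction of the inference cost.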


