EMNLP 2025

November 07, 2025

Suzhou, China


Conventional transformer models typically compress the information from all tokens in a sequence into a single [CLS] token to represent global context, an approach that can lead to information loss in tasks requiring localized or hierarchical cues. In this work, we introduce Inceptive Transformer, a modular and lightweight architecture that enriches transformer-based token representations by integrating a multi-scale feature extraction module inspired by inception networks. Our model balances local and global dependencies by dynamically weighting tokens based on their relevance to a particular task. Evaluation across a diverse range of tasks, including emotion recognition (in both English and Bangla), irony detection, disease identification, and anti-COVID-vaccine tweet classification, shows that our models consistently outperform the baselines by 1% to 14% while maintaining efficiency. These findings highlight the versatility and cross-lingual applicability of our method for enriching transformer-based representations across diverse domains.
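The abstract gives only a high-level description, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it names: inception-style parallel convolutions at multiple kernel sizes over the encoder's token embeddings, and a learned relevance weighting that pools tokens instead of relying on a single [CLS] vector. All class names, kernel sizes, dimensions, and the residual connection are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch based only on the abstract's description; kernel sizes,
# dimensions, and module structure are assumptions, not the paper's code.
import torch
import torch.nn as nn

class InceptiveBlock(nn.Module):
    """Inception-style parallel 1D convolutions over token embeddings,
    capturing local context at multiple scales."""
    def __init__(self, hidden_size: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        branch_dim = hidden_size // len(kernel_sizes)
        # One convolutional branch per kernel size; odd kernels with
        # padding k // 2 preserve the sequence length.
        self.branches = nn.ModuleList([
            nn.Conv1d(hidden_size, branch_dim, k, padding=k // 2)
            for k in kernel_sizes
        ])
        self.proj = nn.Linear(branch_dim * len(kernel_sizes), hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        x = hidden_states.transpose(1, 2)                 # (batch, hidden, seq)
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.proj(multi_scale.transpose(1, 2))     # (batch, seq, hidden)

class TokenWeightedPooling(nn.Module):
    """Dynamically weight tokens by learned task relevance, rather than
    representing the sequence with the [CLS] token alone."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, hidden); mask: (batch, seq) with 1 = real token
        scores = self.scorer(tokens).squeeze(-1)          # (batch, seq)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)           # (batch, seq)
        return torch.einsum("bs,bsh->bh", weights, tokens)  # (batch, hidden)

# Example use on top of any pretrained encoder (residual is an assumption):
#   tokens = encoder(input_ids, attention_mask).last_hidden_state
#   pooled = TokenWeightedPooling(768)(InceptiveBlock(768)(tokens) + tokens,
#                                      attention_mask)
#   logits = classifier(pooled)
```

In such a setup, the inceptive block would sit on top of the encoder's last hidden states as a lightweight add-on module, with the pooled vector fed to a task-specific classification head.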

Downloads

  • Slides
  • Paper
  • Transcript - English (automatic)

