EMNLP 2025

November 09, 2025

Suzhou, China


We present our submission to Task 3 (Discourse Relation Classification) of the DISRPT 2025 shared task. Task 3 introduces a unified set of 17 discourse relation labels across 39 corpora in 16 languages and six discourse frameworks, posing significant multilingual and cross‑formalism challenges. We first benchmark the task by fine‑tuning multilingual BERT‑based models (mBERT, XLM‑RoBERTa‑Base, and XLM‑RoBERTa‑Large) with two argument‑ordering strategies and progressive unfreezing ratios to establish strong baselines. We then evaluate a prompt‑based large language model (Claude Opus 4.0) in zero‑shot and few‑shot settings to understand how LLMs respond to the newly proposed unified labels. Finally, we introduce HiDAC, a Hierarchical Dual‑Adapter Contrastive learning model. Results show that while larger transformer models achieve higher accuracy, the improvements are modest, and that unfreezing the top 75% of encoder layers yields performance comparable to full fine‑tuning while training far fewer parameters. Prompt‑based models lag significantly behind fine‑tuned transformers, and HiDAC achieves the highest overall accuracy (67.5%) while remaining more parameter‑efficient than full fine‑tuning.
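
To make the partial-unfreezing result concrete, the following is a minimal sketch of how such a setup can be expressed with Hugging Face transformers: the bottom 25% of encoder layers stay frozen and the top 75% remain trainable. The model name (xlm-roberta-base), the decision to also freeze the embeddings, and the classification head are illustrative assumptions, not the authors' exact configuration.

    # Sketch: freeze the bottom 25% of encoder layers, train the top 75%.
    # Assumes a 17-way head for the unified DISRPT relation labels.
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=17
    )

    encoder_layers = model.roberta.encoder.layer  # 12 layers in the base model
    n_frozen = len(encoder_layers) // 4           # bottom 25% stays frozen

    # Freezing the embeddings as well is an assumption of this sketch.
    for param in model.roberta.embeddings.parameters():
        param.requires_grad = False
    for layer in encoder_layers[:n_frozen]:
        for param in layer.parameters():
            param.requires_grad = False

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable:,}")

Keeping the lower, more language-general layers fixed while adapting only the upper layers is what reduces the number of trained parameters relative to full fine-tuning, which is the trade-off the abstract reports as costing little accuracy.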
