EMNLP 2025

November 08, 2025

Suzhou, China

This paper describes our submissions to the TSAR 2025 Shared Task on Readability-Controlled Text Simplification. We present a comparative study of three architectures: a minimal rule-based baseline, an expert-enhanced system, and a multi-stage generative pipeline using a T5 model in a zero-shot setting. Because per-instance official scores were not available at the time of analysis, we perform a principled sensitivity analysis via a simulated paired bootstrap to assess the robustness of our comparative claims. Under a wide range of reasonable assumptions, the simpler, more constrained systems show substantially better automatic scores for semantic fidelity and the composite AUTORANK metric. We include a diagnostic failure analysis grounded in actual system outputs, discuss limitations of embedding-based guardrails, and provide concise reproducibility notes in the Appendix. Full code, experimental configurations, and outputs will be released upon acceptance to ensure complete reproducibility.
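The paired bootstrap described in the abstract resamples test instances with replacement, in pairs, and counts how often one system's mean score exceeds the other's. A minimal sketch is below; the function name, resample count, and per-instance scores are illustrative placeholders (simulated, not the official task data):

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Estimate how often system A beats system B on mean score
    when test instances are resampled with replacement in pairs."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    wins_a = 0
    for _ in range(n_resamples):
        # Draw the same instance indices for both systems (paired resample).
        idx = [rng.randrange(n) for _ in range(n)]
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_a > mean_b:
            wins_a += 1
    return wins_a / n_resamples

# Simulated per-instance fidelity scores (placeholders only):
baseline = [0.82, 0.79, 0.85, 0.81, 0.78, 0.84, 0.80, 0.83]
pipeline = [0.74, 0.80, 0.70, 0.77, 0.72, 0.75, 0.73, 0.76]
print(f"P(baseline > pipeline): {paired_bootstrap(baseline, pipeline):.3f}")
```

Pairing the resampled indices across systems keeps the comparison sensitive to per-instance differences rather than to overall score variance, which is the point of the sensitivity analysis sketched in the abstract.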

OUNLP at TSAR 2025 Shared Task: AI-Generated Multi-Round Sentence Simplifier
workshop paper

Cuong Huynh,
Jie Cao and 1 other author
