EMNLP 2025

November 07, 2025

Suzhou, China


Standardized benchmarks are central to evaluating and comparing model performance in Natural Language Processing (NLP). However, Large Language Models (LLMs) have exposed shortcomings in existing benchmarks, and so far there is no clear solution. In this paper, we survey a wide range of benchmarking issues and provide an overview of the solutions suggested in the literature. We observe that these solutions often tackle only a limited number of issues, neglecting other facets. We therefore propose concrete checklists covering all aspects of benchmarking issues, for both benchmark creation and usage. Additionally, we discuss the advantages of adding minimal-sized test suites to benchmarking, to ensure downstream applicability to real-world use cases.

