EMNLP 2025

November 05, 2025

Suzhou, China


In the era of evaluating large language models (LLMs), data contamination has become an increasingly prominent concern. To address this risk, LLM benchmarking has evolved from a static to a dynamic paradigm. In this work, we conduct an in-depth analysis of existing static and dynamic benchmarks for evaluating LLMs. We first examine methods that enhance static benchmarks and identify their inherent limitations. We then highlight a critical gap: the lack of standardized criteria for evaluating dynamic benchmarks. Based on this observation, we propose a set of design principles for dynamic benchmarking and analyze the limitations of existing dynamic benchmarks. This survey provides a concise yet comprehensive overview of recent advances in data contamination research, offering valuable insights and a clear guide for future work. We maintain a GitHub repository that continuously collects both static and dynamic benchmarking methods for LLMs. The repository can be found at this link.

