IJCNLP-AACL 2025

December 21, 2025

Mumbai, India


Keywords: code generation and understanding, automatic evaluation of datasets, evaluation methodologies, LLM, prompting, benchmarking, evaluation

State-of-the-art Large Language Models (LLMs) achieve high pass@1 on general benchmarks like HumanEval (Chen et al., 2021) but underperform on specialized suites such as ParEval (Nichols et al., 2024). Is this because the models lack domain knowledge, or because the prompts provide insufficient detail? To answer this, we introduce PartialOrderEval, which augments any code generation benchmark with a partial order of prompts ranging from minimal to maximally detailed. Applying it to HumanEval and to the serial and OpenMP subsets of ParEval, we measure how pass@1 scales with prompt specificity. Our experiments with Llama-3.x and Qwen2.5-Coder show varying degrees of prompt sensitivity across tasks, and a qualitative analysis identifies explicit I/O specifications, edge-case handling, and stepwise problem breakdowns as the main drivers of improvement from added prompt detail.
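As a rough illustration of the idea described in the abstract, the sketch below shows one way a benchmark task could carry several prompt variants ordered by detail level, with pass@1 computed separately at each level. This is a minimal sketch under assumed interfaces, not the authors' released PartialOrderEval code; in particular, `generate_solution` and `passes_unit_tests` are hypothetical placeholders for an LLM call and a sandboxed test harness.

```python
# Hypothetical sketch: score pass@1 per prompt-detail level for a set of benchmark tasks.
# Assumes each task provides prompt variants ordered from minimal to maximally detailed.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PromptVariant:
    detail_level: int   # position in the (partial) order; 0 = minimal prompt
    text: str           # prompt text sent to the model


@dataclass
class Task:
    task_id: str
    variants: List[PromptVariant]   # ordered from least to most detailed


def pass_at_1_by_detail(
    tasks: List[Task],
    generate_solution: Callable[[str], str],        # placeholder: one sampled completion per prompt
    passes_unit_tests: Callable[[str, str], bool],  # placeholder: run the task's hidden tests
) -> Dict[int, float]:
    """Return pass@1 averaged over tasks, keyed by prompt detail level (single sample per prompt)."""
    solved: Dict[int, int] = {}
    counted: Dict[int, int] = {}
    for task in tasks:
        for variant in task.variants:
            code = generate_solution(variant.text)
            ok = passes_unit_tests(task.task_id, code)
            solved[variant.detail_level] = solved.get(variant.detail_level, 0) + int(ok)
            counted[variant.detail_level] = counted.get(variant.detail_level, 0) + 1
    return {level: solved[level] / counted[level] for level in sorted(counted)}
```

Plotting the returned dictionary (detail level on the x-axis, pass@1 on the y-axis) gives the kind of prompt-specificity scaling curve the abstract describes.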
