AAAI 2026

January 22, 2026

Singapore

Learning Curve Extrapolation (LCE) is a critical technique for accelerating automated machine learning by terminating unpromising training runs early. Recent state-of-the-art methods have improved predictive accuracy by incorporating contextual information, such as neural network architecture. However, these approaches, whether context-agnostic or architecture-aware, still operate under the implicit assumption of a uniform task landscape. They overlook a pivotal, complementary factor: the intrinsic difficulty of the learning task itself. This oversight leads to significant performance degradation, especially on tasks whose learning dynamics diverge from the model's priors. In this work, we argue that task difficulty is a crucial yet neglected dimension for robust LCE. We introduce Difficulty-Adaptive Learning Curve Extrapolation (DA-LCE), a framework that explicitly conditions its predictions on task complexity. Our core contributions are threefold: (1) we propose a transparent, rule-based method to quantify task difficulty from the early shape of learning curves, eliminating the need for external meta-features; (2) we design a novel data generation pipeline using a conditional diffusion model to create a high-fidelity, difficulty-conditioned synthetic prior for training; (3) we introduce a Conditional Difficulty-aware PFN (CD-PFN) that leverages this information to achieve superior predictive accuracy. Extensive experiments on a wide range of benchmarks demonstrate that CD-PFN significantly outperforms difficulty-agnostic baselines and even state-of-the-art architecture-aware models. This result highlights that task difficulty is a powerful, complementary source of information whose impact can rival or exceed that of model architecture.
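The abstract does not spell out the paper's rule set, but the idea of scoring difficulty from the early shape of a learning curve can be illustrated with a minimal sketch. The two rules below (normalized early gain and the fraction of near-flat steps), the function name `difficulty_score`, and the threshold `plateau_eps` are all illustrative assumptions, not the authors' actual method.

```python
# A minimal sketch of rule-based difficulty scoring from the early shape of a
# learning curve. The rules and thresholds are assumptions for illustration,
# not the paper's actual rule set.
import numpy as np

def difficulty_score(curve, plateau_eps: float = 0.01) -> float:
    """Map the first few validation-accuracy observations to a score in [0, 1].

    Two simple rules: (1) how much of the remaining headroom above the first
    point the curve has already closed, and (2) what fraction of early steps
    showed negligible improvement. Higher scores mean a harder-looking task.
    """
    curve = np.asarray(curve, dtype=float)
    if curve.size < 3:
        raise ValueError("need at least 3 early observations")

    # Rule 1: early gain, normalized by the headroom above the first point.
    headroom = max(1.0 - curve[0], 1e-8)
    gain = np.clip((curve[-1] - curve[0]) / headroom, 0.0, 1.0)

    # Rule 2: fraction of early steps with near-flat improvement.
    plateau_frac = float(np.mean(np.diff(curve) < plateau_eps))

    # Low gain and a flat curve both push difficulty toward 1.
    return float(0.5 * ((1.0 - gain) + plateau_frac))

print(difficulty_score([0.40, 0.62, 0.75, 0.81]))   # fast riser -> ~0.16 (easy)
print(difficulty_score([0.40, 0.405, 0.41, 0.41]))  # near-flat  -> ~0.99 (hard)
```

Because such a score is computed purely from the observed curve prefix, it needs no external meta-features, matching the transparency motivation stated in contribution (1).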
