Keynote
Does In-Context Learning Offer the Best Tradeoff in Accuracy, Robustness, and Efficiency for Model Adaptation?
Adapting a model trained on vast amounts of data to new tasks with limited labeled data has long been a challenging problem, and over the years a diverse range of techniques has been explored. Effective model adaptation requires achieving high accuracy through task-specific specialization without forgetting previously acquired knowledge, robustly handling the high variance that comes with limited task-relevant supervision, and doing so efficiently, with minimal compute and memory overhead. Recently, large language models (LLMs) have demonstrated remarkable ease of adaptation to new tasks from just a few examples provided in context, without any explicit training for such a capability. Puzzled by this apparent success, many researchers have sought to explain why in-context learning (ICL) works, yet our understanding remains incomplete. In this talk, we examine this emerging phenomenon and assess its potential to meet our longstanding model adaptation goals of accuracy, robustness, and efficiency.