This paper introduces an algorithm for selecting demonstration examples for in-context learning on a query set. Given a set of n examples, how can we quickly select k out of n that best serve as the conditioning for a downstream task? This problem has broad applications in prompt tuning and chain-of-thought reasoning, among others. Since model weights remain fixed during in-context learning, prior work has designed selection methods based on similarity scores measured in the input embedding space. This work proposes a new approach based on gradients of the model output taken with respect to the input embeddings. Our approach estimates model outputs through a first-order approximation using these gradients. We then apply this estimation to multiple randomly sampled subsets, aggregate the sampled subsets' outcomes into an influence score for each demonstration, and select the k most relevant examples. This procedure requires pre-computing model outputs and gradients only once, yielding an algorithm that runs in linear time with respect to model and training-set sizes. Extensive experiments across various LLMs and datasets validate the efficiency of our approach. We show that the gradient estimation procedure approximates full inference with less than 1% error across six datasets. This allows us to scale up subset selection methods that would otherwise require full inference by up to 37.7× on LLMs with up to 34 billion parameters, while outperforming existing selection methods based on input embeddings by 11% on average.
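To make the pipeline concrete, the following is a minimal sketch of the three-step procedure the abstract describes: a first-order (gradient-based) estimate of the model output for a candidate subset, Monte Carlo sampling over random subsets, and per-example influence aggregation followed by top-k selection. All names, shapes, and the toy linear stand-in for the LLM are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n candidate demonstrations, each represented by a
# fixed d-dimensional input embedding (illustrative sizes, not the paper's).
n, d, k, num_subsets = 20, 8, 4, 200
embeddings = rng.normal(size=(n, d))

# Precompute once: a reference output f0 and a gradient g of the model
# output with respect to the input embedding space. Here a toy linear
# model stands in for the LLM, so the gradient is just its weight vector.
w = rng.normal(size=d)
f0 = 0.0
g = w

def approx_output(subset):
    """First-order estimate: f(S) ≈ f0 + sum over i in S of g · e_i.

    No forward pass per subset is needed; the precomputed gradient is
    reused, which is what makes the overall procedure linear-time.
    """
    return f0 + sum(g @ embeddings[i] for i in subset)

# Sample random subsets, score each with the cheap estimate, and
# accumulate an average influence score for every demonstration.
influence = np.zeros(n)
counts = np.zeros(n)
for _ in range(num_subsets):
    subset = rng.choice(n, size=k, replace=False)
    score = approx_output(subset)
    influence[subset] += score
    counts[subset] += 1

influence /= np.maximum(counts, 1)

# Select the k demonstrations with the highest aggregated influence.
selected = np.argsort(-influence)[:k]
```

In this sketch the expensive step (computing `f0` and `g`) happens once, while each sampled subset costs only k dot products, matching the abstract's claim that full inference per subset is avoided.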