ACL 2023

July 11, 2023

Toronto, Canada

FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning

DOI: 10.48448/ed9j-8z52

keywords:

fusion-in-decoder

in-context learning

few-shot learning

meta-learning

Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated demonstrations are often excessively long and induce additional computation. Inspired by fusion-in-decoder (FiD) models, which efficiently aggregate many passages and thus outperform concatenation-based models in open-domain QA, we hypothesize that similar techniques can be applied to improve the efficiency and end-task performance of ICL. To verify this, we present a comprehensive study on applying three fusion methods to ICL: concatenation-based (early fusion), FiD (intermediate fusion), and ensemble-based (late fusion). We adopt a meta-learning setup where a model is first trained to perform ICL on a mixture of tasks using one selected fusion method, then evaluated on held-out tasks for ICL. Results on 11 held-out tasks show that FiD-ICL matches or outperforms the other two fusion methods. Additionally, we show that FiD-ICL (1) is 10x faster at inference time than concat-based and ensemble-based ICL, since the representations of in-context examples can be pre-computed and reused; and (2) enables scaling up meta-training to 3B-sized models, which would fail for concat-based ICL.
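
To make the intermediate-fusion idea concrete, below is a minimal sketch (not the authors' released code) of FiD-style in-context learning with an encoder-decoder model: each demonstration is encoded separately, the encoder states are concatenated, and the decoder cross-attends over the fused states. The use of Hugging Face transformers, the t5-small checkpoint, and the sentiment-style prompts are assumptions for illustration only; a model would need the paper's meta-training before such prompting is effective.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Illustrative few-shot demonstrations plus a test input.
demonstrations = [
    "review: great movie! sentiment: positive",
    "review: boring plot. sentiment: negative",
]
test_input = "review: a delightful surprise. sentiment:"

def encode(text: str) -> torch.Tensor:
    """Encode one segment independently; these states can be pre-computed
    and cached per demonstration, which is the source of the speed-up."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model.encoder(input_ids=ids).last_hidden_state  # (1, seq, hidden)

# Fuse along the sequence dimension: demonstrations + test input.
fused = torch.cat([encode(d) for d in demonstrations] + [encode(test_input)], dim=1)

# Fusion-in-decoder: the decoder cross-attends over all fused encoder states.
output_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused),
    max_new_tokens=5,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In contrast, concat-based (early) fusion would tokenize all demonstrations and the test input as one long sequence before a single encoder pass, and ensemble-based (late) fusion would run the model once per demonstration and combine the output scores.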

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)
