AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

Workshop paper, EMNLP 2022
December 7, 2022 — Abu Dhabi, United Arab Emirates
Video DOI: https://doi.org/10.48448/6zkp-d931


