VIDEO DOI: https://doi.org/10.48448/e0s9-bw98

technical paper

NAACL 2022

July 11, 2022

Seattle, United States

KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation
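The title names the paper's two compression ingredients. As a quick illustration of the first, Kronecker decomposition, below is a minimal NumPy sketch (not the authors' implementation; the factor shapes are illustrative assumptions) of how replacing a BERT-sized weight matrix W with a Kronecker product A ⊗ B shrinks its parameter count:

    import numpy as np

    # A dense weight matrix W of shape (m1*m2) x (n1*n2) is approximated by the
    # Kronecker product of two small factors, W ~ A (x) B, so only A and B are stored.
    m1, n1 = 24, 24   # illustrative factor shapes, not taken from the paper
    m2, n2 = 32, 32

    A = np.random.randn(m1, n1)
    B = np.random.randn(m2, n2)
    W = np.kron(A, B)  # shape (768, 768), the size of a BERT-base projection matrix

    dense_params = (m1 * m2) * (n1 * n2)  # 768 * 768 = 589,824
    kron_params = m1 * n1 + m2 * n2       # 576 + 1,024 = 1,600
    print(W.shape, dense_params, kron_params)  # (768, 768) 589824 1600

The second ingredient, knowledge distillation, trains the compressed student model to reproduce the behavior of the original pre-trained teacher; the paper linked above gives the actual factorization and training details.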
