Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models

EMNLP 2022 (Findings / work in progress)

Abu Dhabi, United Arab Emirates

Video DOI: https://doi.org/10.48448/54g5-6w32
