
poster

ACL 2024

August 12, 2024

Bangkok, Thailand

STEP: Staged Parameter-Efficient Pre-training for Large Language Models

keywords: efficiency, pre-training, language models

Pre-training large language models faces significant memory challenges due to the large size of model weights. We propose STaged parameter-Efficient Pre-training (STEP), which combines ideas from parameter-efficient tuning and staged training. We conduct experiments on pre-training models of various sizes and demonstrate that STEP can achieve up to a 40.4% reduction in maximum memory requirement compared to vanilla pre-training while maintaining comparable performance.
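The abstract only names STEP's two ingredients, so the following is a minimal, hypothetical PyTorch sketch of how staged training and parameter-efficient tuning can be combined: each new stage appends fully trainable layers while earlier-stage layers are frozen behind small LoRA-style adapters, so gradients and optimizer state are kept only for a fraction of the weights. The layer sizes, stage count, adapter rank, and helper names below are illustrative assumptions, not details from the paper.

    # Illustrative sketch only: the abstract does not spell out STEP's algorithm.
    # It combines the two ideas the abstract names: staged training (growing depth
    # between stages) and parameter-efficient tuning (LoRA-style adapters on
    # frozen earlier-stage layers). All sizes and ranks here are hypothetical.
    import torch
    import torch.nn as nn


    class LoRALinear(nn.Module):
        """A frozen linear layer plus a small trainable low-rank update."""

        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # frozen: no gradients, no optimizer state
            self.lora_a = nn.Linear(base.in_features, rank, bias=False)
            self.lora_b = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.lora_b.weight)  # adapter starts as a zero update

        def forward(self, x):
            return self.base(x) + self.lora_b(self.lora_a(x))


    def merge_adapter(layer: LoRALinear) -> nn.Linear:
        """Fold the trained low-rank update back into the frozen base weight."""
        with torch.no_grad():
            layer.base.weight.add_(layer.lora_b.weight @ layer.lora_a.weight)
        return layer.base


    def grow_model(layers: nn.ModuleList, n_new: int, width: int, rank: int) -> nn.ModuleList:
        """Start a new stage: adapter-wrap existing layers, append new trainable layers."""
        grown = nn.ModuleList()
        for layer in layers:
            base = merge_adapter(layer) if isinstance(layer, LoRALinear) else layer
            grown.append(LoRALinear(base, rank=rank))  # earlier stages: adapter-only updates
        for _ in range(n_new):
            grown.append(nn.Linear(width, width))      # new stage: fully trainable
        return grown


    width, rank = 64, 8
    layers = nn.ModuleList([nn.Linear(width, width) for _ in range(2)])  # stage 0

    for stage in range(3):
        if stage > 0:
            layers = grow_model(layers, n_new=2, width=width, rank=rank)
        trainable = [p for p in layers.parameters() if p.requires_grad]
        opt = torch.optim.AdamW(trainable, lr=1e-3)  # optimizer state only for trainable params
        for _ in range(10):  # toy training loop on random data
            h = torch.randn(4, width)
            for layer in layers:
                h = torch.relu(layer(h))
            loss = h.pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        n_total = sum(p.numel() for p in layers.parameters())
        n_train = sum(p.numel() for p in trainable)
        print(f"stage {stage}: {n_train}/{n_total} parameters trainable")

The printed counts show that after the first stage only a small fraction of parameters carries gradients and optimizer state, which is the kind of maximum-memory reduction the abstract refers to.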
