VIDEO DOI: https://doi.org/10.48448/8fc4-df38

poster

ACL 2024

August 14, 2024

Bangkok, Thailand

PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA

Keywords:

low-resource methods for NLP

LoRA

parameter efficiency

parameter sharing

With the rapid scaling of large language models (LLMs), serving numerous low-rank adaptations (LoRAs) concurrently has become increasingly impractical, leading to unaffordable costs and necessitating more parameter-efficient finetuning methods. In this work, we introduce Partially Rotation-enhanced Low-Rank Adaptation (PRoLoRA), an intra-layer sharing mechanism comprising four essential components: broadcast reduction, rotation enhancement, partially-sharing refinement, and a rectified initialization strategy. As a superset of LoRA, PRoLoRA retains its advantages and effectively circumvents the drawbacks of peer parameter-sharing methods, offering superior model capacity, practical feasibility, and broad applicability. Empirical experiments demonstrate the remarkably higher parameter efficiency of PRoLoRA under both fixed parameter budgets and fixed performance targets, as well as its scalability to larger LLMs. Notably, with half as many trainable parameters, PRoLoRA still outperforms LoRA on multiple instruction tuning datasets. An ablation study further validates the necessity of the individual components and highlights the superiority of PRoLoRA over three potential variants. We hope that its conspicuously higher parameter efficiency can establish PRoLoRA as a resource-friendly alternative to LoRA.
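To make the sharing mechanism concrete, the sketch below illustrates the broadcast-and-rotate idea in PyTorch. It is a minimal illustration based only on the abstract, not the authors' implementation: all names (`PRoLoRASketch`, `copies`, `unshared_rank`, etc.) are hypothetical, and the paper's rectified initialization strategy is approximated here by a plain scaled-normal/zero init. Each low-rank factor stores one trainable chunk along the hidden dimension; the full factor is rebuilt by tiling that chunk (broadcast reduction) and circularly rotating each copy along the rank axis so the replicas differ (rotation enhancement), while a few ranks are kept fully trainable (partial sharing).

```python
import torch
import torch.nn as nn

class PRoLoRASketch(nn.Module):
    """Hypothetical sketch of PRoLoRA-style intra-layer sharing on top of a
    frozen linear layer. Not the authors' code: shared chunks are tiled with
    per-copy circular rotations, and a few unshared ranks remain ordinary
    full-width LoRA factors."""

    def __init__(self, in_features, out_features, rank=8, unshared_rank=2,
                 copies=4, alpha=16.0):
        super().__init__()
        assert in_features % copies == 0 and out_features % copies == 0
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.copies = copies
        self.scaling = alpha / rank
        shared_rank = rank - unshared_rank
        # Shared chunks: one slice along the hidden dimension, broadcast later.
        self.A_chunk = nn.Parameter(torch.randn(shared_rank, in_features // copies) * 0.02)
        self.B_chunk = nn.Parameter(torch.zeros(out_features // copies, shared_rank))
        # Unshared ranks: ordinary (full-width) LoRA factors.
        self.A_free = nn.Parameter(torch.randn(unshared_rank, in_features) * 0.02)
        self.B_free = nn.Parameter(torch.zeros(out_features, unshared_rank))

    def _broadcast_with_rotation(self, chunk, dim):
        # Tile the chunk `copies` times along `dim`, rotating each copy along
        # the rank axis by a different offset so replicas are not identical.
        rank_axis = 0 if dim == 1 else 1
        pieces = [torch.roll(chunk, shifts=i, dims=rank_axis) for i in range(self.copies)]
        return torch.cat(pieces, dim=dim)

    def forward(self, x):
        # Rebuild full-size low-rank factors A (rank x in) and B (out x rank).
        A = torch.cat([self._broadcast_with_rotation(self.A_chunk, dim=1), self.A_free], dim=0)
        B = torch.cat([self._broadcast_with_rotation(self.B_chunk, dim=0), self.B_free], dim=1)
        return self.base(x) + self.scaling * (x @ A.t()) @ B.t()

# Quick smoke test of the sketch.
layer = PRoLoRASketch(4096, 4096)
y = layer(torch.randn(2, 4096))
```

Under these assumed settings the shared factors shrink by roughly the number of copies: with `in_features = out_features = 4096`, `rank = 8`, `unshared_rank = 2`, and `copies = 4`, the module trains about 28.7K adapter parameters versus 65.5K for plain LoRA at the same rank, which conveys how the method can match or beat LoRA's quality at around half the trainable-parameter budget.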

Downloads

  • Slides
  • Transcript English (automatic)

