poster

ACL 2024

August 12, 2024

Bangkok, Thailand

Fine-Tuning ASR models for Very Low-Resource Languages: A Study on Mvskoke

Keywords: endangered languages, low-resource languages, speech

Recent advances in multilingual models for automatic speech recognition (ASR) have achieved high accuracy even for languages with extremely limited resources. This study examines ASR modeling for Mvskoke, an Indigenous language of North America. The parameter efficiency of adapter training is contrasted with fine-tuning entire models, and it is demonstrated how performance varies with the amount of training data. Additionally, the models are evaluated with trigram language model decoding, and their outputs are compared across different types of speech recordings. Results show that training an adapter is both parameter-efficient and yields higher accuracy for a relatively small amount of data.
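To give a concrete sense of the trigram language model decoding the abstract mentions, here is a minimal, self-contained sketch: an add-k smoothed trigram model trained on a toy corpus, used to rescore competing ASR hypotheses. The toy corpus, the equal acoustic scores, and the `rescore` helper with its `lm_weight` parameter are all illustrative assumptions for this sketch, not the paper's actual setup, which would build the LM from Mvskoke text and combine it with acoustic scores during decoding.

```python
import math
from collections import Counter

def train_trigram(corpus, k=1.0):
    """Train an add-k smoothed trigram LM from tokenized sentences.

    Returns a function mapping a token list to its log-probability.
    """
    tri, bi = Counter(), Counter()
    vocab = set()
    for sent in corpus:
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    V = len(vocab)

    def logprob(sent):
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        lp = 0.0
        for i in range(2, len(toks)):
            num = tri[(toks[i - 2], toks[i - 1], toks[i])] + k
            den = bi[(toks[i - 2], toks[i - 1])] + k * V
            lp += math.log(num / den)
        return lp

    return logprob

def rescore(hyps, acoustic_scores, lm, lm_weight=0.5):
    """Pick the hypothesis maximizing acoustic score + weighted LM score.

    acoustic_scores are hypothetical log-likelihoods from an ASR model.
    """
    scored = [(a + lm_weight * lm(h), h) for h, a in zip(hyps, acoustic_scores)]
    return max(scored)[1]

# With equal acoustic scores, the LM prefers the hypothesis whose
# trigrams were seen in training over one with unseen trigrams.
lm = train_trigram([["the", "cat", "sat"],
                    ["the", "cat", "ran"],
                    ["the", "dog", "sat"]])
best = rescore([["the", "cat", "cat"], ["the", "cat", "sat"]],
               [0.0, 0.0], lm)
```

In a full decoder the LM score would be folded into the beam search over CTC outputs (shallow fusion) rather than applied to a fixed n-best list, but the scoring arithmetic is the same.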
