VIDEO DOI: https://doi.org/10.48448/pt9p-8766

keynote

ACL 2024

August 14, 2024

Bangkok, Thailand

Are LLMs Narrowing Our Horizon? Let’s Embrace Variation in NLP!

NLP research has made significant progress, and our community's achievements are becoming deeply integrated into society. The recent paradigm shift driven by rapid advances in Large Language Models (LLMs) offers immense potential, but it has also made NLP more homogeneous. In this talk, I will argue for the importance of embracing variation in research, which will lead to more innovation and, in turn, trust. I will give an overview of current challenges and show how they have led to a loss of trust in our models. To counter this, I propose embracing variation in three key areas: the inputs to models, the outputs of models, and research itself. Embracing variation holistically will be crucial for moving our field towards more trustworthy, human-facing NLP.
