VIDEO DOI: https://doi.org/10.48448/ttw0-w647

technical paper

EMNLP 2021

November 08, 2021

Attention Weights in Transformer NMT Fail Aligning Words Between Sequences but Largely Explain Model Predictions
