VIDEO DOI: https://doi.org/10.48448/sqvk-y475

technical paper

ACL-IJCNLP 2021

03 August 2021

Thailand

What Context Features Can Transformer Language Models Use?


Next from ACL-IJCNLP 2021

Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment
technical paper

ACL-IJCNLP 2021

Freda Shi and 2 other authors

03 August 2021

Similar lecture

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
technical paper

ACL-IJCNLP 2021

Ulme Wennberg

03 August 2021
