VIDEO DOI: https://doi.org/10.48448/jcq7-rf77

technical paper

ACL-IJCNLP 2021

August 02, 2021

Thailand

BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?


Downloads

  • Slides
  • Transcript (English, automatic)

Next from ACL-IJCNLP 2021

Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words

technical paper

ACL-IJCNLP 2021

Valentin Hofmann, Janet B. Pierrehumbert, Hinrich Schütze

August 02, 2021

Similar lecture

Distilling Relation Embeddings from Pre-trained Language Models

workshop paper

ACL 2022

Asahi Ushio, Jose Camacho-Collados, Steven Schockaert

May 27, 2022
