VIDEO DOI: https://doi.org/10.48448/cyqn-wp51

technical paper

NAACL 2021

June 07, 2021

Live on Underline

An Architecture for Accelerated Large-Scale Inference of Transformer-Based Language Models


