VIDEO DOI: https://doi.org/10.48448/qtkd-7x63

technical paper

ACL-IJCNLP 2021

August 02, 2021

Thailand

H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences


Downloads

Slides, Paper, Transcript English (automatic)

Next from ACL-IJCNLP 2021

CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals

technical paper

ACL-IJCNLP 2021

Yuqi Ren and Deyi Xiong

02 August 2021

Similar lecture

On the Distribution, Sparsity, and Inference-time Quantization of Attention Values in Transformers

technical paper

ACL-IJCNLP 2021

Tianchu Ji, Niranjan Balasubramanian, and 4 other authors

02 August 2021
