VIDEO DOI: https://doi.org/10.48448/sb7g-bv94

poster

IJCNLP-AACL 2022

November 23, 2022

Taipei City, Taiwan

NepBERTa: Nepali Language Model Trained in a Large Corpus


