Analyzing the Limits of Self-Supervision in Handling Bias in Language

EMNLP 2022 (Findings / Work in Progress)

Abu Dhabi, United Arab Emirates

Video DOI: https://doi.org/10.48448/90gv-3e95

