A Study of Implicit Bias in Pretrained Language Models against People with Disabilities

Poster, COLING 2022
October 12, 2022
Gyeongju, Republic of Korea

VIDEO DOI: https://doi.org/10.48448/3axn-q354


Downloads
  • Paper
  • Transcript (English, automatic)

