workshop paper

ACL 2024

August 16, 2024

Bangkok, Thailand

Multilingual DAMA for Debiasing Translation

keywords:

model editing

debiasing

translation

Large language models have recently become state-of-the-art solutions for machine translation across many language pairs. Like previous approaches, LLMs are prone to gender bias, e.g., they translate sentences mentioning men more accurately than sentences mentioning women. To address this issue, we extend a robust Debiasing Algorithm through Model Adaptation (DAMA, Limisiewicz et al. 2024), previously applied to language generation, to the multilingual setting and the translation task. The method decreases stereotypical bias with only a slight to moderate decrease in general-domain performance. The method is still pending evaluation in the GeBNLP shared task, and the results will be updated when they become available.
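The exact DAMA update is specified in Limisiewicz et al. (2024); as a rough, illustrative sketch of the general idea behind projection-based model editing for debiasing, the snippet below removes a hypothetical bias direction from a feed-forward weight matrix so that the layer's outputs can no longer vary along it. The function name, dimensions, and the way the bias direction would be estimated are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Illustrative only: a generic projection-based weight edit in the spirit of
# model-editing debiasing methods such as DAMA. The actual DAMA update rule is
# described in Limisiewicz et al. (2024); names and shapes here are hypothetical.

def project_out_direction(W: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Remove the component of a weight matrix's output space that lies
    along a given bias direction.

    W        : (d_out, d_in) feed-forward output projection weights
    bias_dir : (d_out,) direction associated with the unwanted bias
    """
    b = bias_dir / np.linalg.norm(bias_dir)   # normalise the bias direction
    P = np.eye(W.shape[0]) - np.outer(b, b)   # projector onto its orthogonal complement
    return P @ W                              # edited weights: outputs no longer move along b

# Toy usage: edit a random "FFN" matrix so its outputs are orthogonal to bias_dir.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
bias_dir = rng.normal(size=8)
W_edited = project_out_direction(W, bias_dir)
print(np.allclose(bias_dir @ W_edited, 0.0, atol=1e-8))  # True: bias direction removed
```

In this sketch the edit is a closed-form, one-shot modification of the weights, which reflects the model-editing (rather than fine-tuning) character of DAMA-style debiasing.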
