VIDEO DOI: https://doi.org/10.48448/rs9d-mc96

poster

ACL 2024

August 13, 2024

Bangkok, Thailand

Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster

keywords: explainability, hate speech, social media

Content moderators play a key role in keeping the conversation on social media healthy. While the high volume of content they need to judge represents a bottleneck in the moderation pipeline, no studies have explored how models could support them in making faster decisions. There is, by now, a vast body of research into detecting hate speech, sometimes explicitly motivated by a desire to help improve content moderation, but published research using real content moderators is scarce. In this work we investigate the effect of explanations on the speed of real-world moderators. Our experiments show that while generic explanations do not affect their speed and are often ignored, structured explanations lower moderators' decision-making time by 7.4%.

Downloads

Slides
Transcript (English, automatic)
