VIDEO DOI: https://doi.org/10.48448/9g19-3v31

poster

ACL 2024

August 14, 2024

Bangkok, Thailand

Tox-BART: Leveraging Toxicity Attributes for Explanation Generation of Implicit Hate Speech

keywords:

toxbart

implications

stereotype

nlp4sg

explanations

explanation generation

hate speech

Employing language models to generate explanations for an incoming implicit hate post is an active area of research. The explanation is intended to make the underlying stereotype explicit and to aid content moderators. Training often combines the top-k relevant knowledge graph (KG) tuples to provide world knowledge and improve performance on standard metrics. Interestingly, our study presents conflicting evidence for the role of the quality of KG tuples in generating implicit explanations. Consequently, simpler models incorporating external toxicity signals outperform KG-infused models. Compared to the KG-based setup, we observe comparable performance on the SBIC (LatentHatred) datasets, with performance variations of +0.44 (+0.49), +1.83 (-1.56), and -4.59 (+0.77) in BLEU, ROUGE-L, and BERTScore, respectively. Further human evaluation and error analysis reveal that our proposed setup produces more precise explanations than zero-shot GPT-3.5, highlighting the intricate nature of the task.
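
As a rough illustration of the kind of setup the abstract describes (not the authors' released implementation), the sketch below prepends externally computed toxicity-attribute scores to the post text before passing it to a BART encoder-decoder that generates the implied-stereotype explanation. The base checkpoint, the attribute set (toxicity, insult, identity_attack), the example scores, and the serialization format are all assumptions made only for illustration.

```python
# Minimal sketch, assuming toxicity-attribute scores come from an external
# classifier (e.g., a Perspective-style model) and that a BART checkpoint is
# fine-tuned or used as-is to generate the explanation. Not the paper's code.
from transformers import BartForConditionalGeneration, BartTokenizerFast

MODEL_NAME = "facebook/bart-base"  # assumed base checkpoint for illustration

tokenizer = BartTokenizerFast.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)


def build_input(post: str, tox_scores: dict) -> str:
    """Serialize toxicity-attribute scores and prepend them to the post text."""
    attrs = " ".join(f"{name}: {score:.2f}" for name, score in tox_scores.items())
    return f"{attrs} </s> {post}"


post = "Why do they even let those people vote?"
# Hypothetical attribute scores; in practice these would come from a toxicity classifier.
tox_scores = {"toxicity": 0.71, "insult": 0.38, "identity_attack": 0.64}

inputs = tokenizer(build_input(post, tox_scores), return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Without task-specific fine-tuning the generated text will not be a meaningful explanation; the snippet only shows one plausible way to expose toxicity signals to the model's input, in contrast to concatenating retrieved KG tuples.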
