VIDEO DOI: https://doi.org/10.48448/81qn-z715

workshop paper

ACL 2024

August 15, 2024

Bangkok, Thailand

KUL@SMM4H2024: Optimizing Text Classification with Quality-Assured Augmentation Strategies

keywords:

text classification

regularised dropout

LM data augmentation

This paper presents our models for the Social Media Mining for Health 2024 shared task, specifically Task 5, which involves classifying tweets reporting a child with childhood disorders (annotated as "1") versus those merely mentioning a disorder (annotated as "0"). We utilized a classification model enhanced with diverse textual and language model-based augmentations. To ensure quality, we used semantic similarity, perplexity, and lexical diversity as evaluation metrics. Combining supervised contrastive learning and cross-entropy-based learning, our best model, incorporating R-drop and various LM generation-based augmentations, achieved an impressive F1 score of 0.9230 on the test set, surpassing the task mean and median scores.
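The quality assurance described in the abstract (semantic similarity, perplexity, and lexical diversity as evaluation metrics) can be pictured as a filter over candidate augmentations. Below is a minimal sketch of such a gate, assuming sentence-transformers for similarity, GPT-2 for perplexity, and a type-token ratio for lexical diversity; the model choices and thresholds are illustrative, not the ones used in the paper.

```python
# Hypothetical quality gate for augmented tweets; models and thresholds are
# illustrative assumptions, not the paper's configuration.
import math
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

sim_model = SentenceTransformer("all-MiniLM-L6-v2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    # Language-model perplexity: high values flag degenerate generations.
    ids = lm_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return math.exp(loss.item())

def type_token_ratio(text: str) -> float:
    # Crude lexical diversity proxy: unique tokens over total tokens.
    tokens = text.lower().split()
    return len(set(tokens)) / max(len(tokens), 1)

def keep_augmentation(original: str, augmented: str) -> bool:
    # Semantic similarity: the augmentation should stay close to the source tweet.
    sim = util.cos_sim(sim_model.encode(original), sim_model.encode(augmented)).item()
    # Illustrative thresholds; a real pipeline would tune these on held-out data.
    return sim > 0.7 and perplexity(augmented) < 200 and type_token_ratio(augmented) > 0.5
```

The training objective combines cross-entropy, supervised contrastive learning, and R-drop. The sketch below covers only the R-drop plus cross-entropy part for a Hugging Face-style sequence classifier; `alpha` is a hypothetical weighting hyperparameter, and the supervised contrastive term would be added on top of this loss.

```python
# Minimal R-drop objective: two dropout-perturbed forward passes, cross-entropy
# on both, plus a symmetric KL penalty between the two predictive distributions.
import torch.nn.functional as F

def rdrop_loss(model, input_ids, attention_mask, labels, alpha=1.0):
    # Two forward passes with dropout active yield two different sub-models.
    logits1 = model(input_ids=input_ids, attention_mask=attention_mask).logits
    logits2 = model(input_ids=input_ids, attention_mask=attention_mask).logits

    # Average cross-entropy over both passes.
    ce = 0.5 * (F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels))

    # Bidirectional KL divergence regularizer between the two distributions.
    p = F.log_softmax(logits1, dim=-1)
    q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean")
    )
    return ce + alpha * kl
```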


