VIDEO DOI: https://doi.org/10.48448/cfxm-pj37

workshop paper

ACL 2024

August 15, 2024

Bangkok, Thailand

LLaMA-Based Models for Aspect-Based Sentiment Analysis

keywords: fine-tuning large language models, LoRA, quantization, large language models, aspect-based sentiment analysis

While large language models (LLMs) show promise for various tasks, their performance on compound aspect-based sentiment analysis (ABSA) tasks lags behind that of fine-tuned models. However, the potential of LLMs fine-tuned for ABSA remains unexplored. This paper examines the capabilities of open-source LLMs fine-tuned for ABSA, focusing on LLaMA-based models. We evaluate performance across four tasks and eight English datasets, finding that the fine-tuned Orca 2 model surpasses state-of-the-art results on all tasks. However, all models struggle in zero-shot and few-shot scenarios compared to fully fine-tuned ones. Additionally, we conduct an error analysis to identify the challenges faced by fine-tuned models.
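
The keywords point to parameter-efficient fine-tuning with LoRA adapters and quantization. The page does not include the paper's code, so the following is only a minimal sketch of what such a QLoRA-style setup typically looks like with the Hugging Face transformers and peft libraries; the model checkpoint, adapter rank, target modules, and the ABSA prompt format shown are illustrative assumptions, not the authors' exact configuration.

# Hypothetical sketch: LoRA fine-tuning with 4-bit quantization (QLoRA-style)
# for generative ABSA. Hyperparameters and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "microsoft/Orca-2-13b"  # one plausible LLaMA-based checkpoint

# Load the base model in 4-bit precision so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters: only a small fraction of weights are trained.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# An ABSA instance framed as instruction-style generation: the model reads
# a review sentence and emits (aspect term, sentiment polarity) pairs.
example = (
    "Extract aspect terms and their sentiment polarity.\n"
    "Sentence: The battery life is great but the screen is dim.\n"
    "Answer: (battery life, positive); (screen, negative)"
)
inputs = tokenizer(example, return_tensors="pt").to(model.device)
# From here, a standard causal-LM training loop (e.g. transformers.Trainer)
# would optimize only the LoRA parameters on such formatted examples.

In this kind of setup only the adapter weights receive gradients, which is what makes fine-tuning a 13B-parameter model on modest hardware practical; the same prompt format can then be reused for the zero-shot and few-shot comparisons the abstract mentions.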


