
VIDEO DOI: https://doi.org/10.48448/de8j-0r84
workshop paper
SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection
keywords:
stanceeval
smash
prompt engineering
llms
This paper presents Team SMASH of the University of Edinburgh's submission to the Stance Detection in Arabic Language (StanceEval) 2024 shared task. We evaluated the performance of various BERT-based models and large language models (LLMs). MARBERT performs best among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. The Command R model outperforms all models, with the highest overall F1 score of 0.661 and macro-F1 score of 0.820.
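To make the task setup concrete, below is a minimal illustrative sketch of zero-shot prompting an instruction-following LLM for Arabic stance detection. The three-way FAVOR/AGAINST/NONE label set, the prompt wording, and the generic `generate` callable are assumptions for illustration only; they are not the authors' actual prompts or code.

```python
# Minimal sketch (not the paper's exact method) of zero-shot stance
# detection with an instruction-following LLM. The label set, prompt
# wording, and `generate` callable are illustrative assumptions.
from typing import Callable

LABELS = ("FAVOR", "AGAINST", "NONE")

def build_prompt(target: str, tweet: str) -> str:
    """Compose a zero-shot stance-detection prompt for an Arabic tweet."""
    return (
        "You are a stance detection system for Arabic social media text.\n"
        f"Target: {target}\n"
        f"Tweet: {tweet}\n"
        "Does the tweet express a stance in FAVOR of the target, AGAINST it, "
        "or NONE? Answer with exactly one word: FAVOR, AGAINST, or NONE."
    )

def classify_stance(generate: Callable[[str], str], target: str, tweet: str) -> str:
    """Query the LLM and map its free-form reply onto the label set."""
    reply = generate(build_prompt(target, tweet)).strip().upper()
    for label in LABELS:
        if label in reply:
            return label
    return "NONE"  # fall back when the reply cannot be parsed
```

In this framing, swapping the backbone model (e.g., a BERT-based classifier versus a prompted LLM such as Command R) only changes how `generate` or the classifier is implemented; the stance label space stays the same.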