

Workshop paper
ASOS at KSAA-CAD 2024: One Embedding is All You Need for Your Dictionary
Keywords:
LLM
Arabic
NLP
deep learning
BERT
Semantic search tasks, including Reverse Dictionary and Word Sense Disambiguation in Arabic, have advanced rapidly following progress in large language models. This paper describes our participation in the KSAA-CAD (Contemporary Arabic Dictionary) shared task. We propose two models that achieved first place in both tasks. We conducted comprehensive experiments on the latest five multilingual sentence transformers and the Arabic BERT model for semantic embedding extraction. We achieved a ranking score of 0.06 on the Reverse Dictionary task, double that of last year's winner, and an accuracy of 0.268 on the Word Sense Disambiguation task.
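As a concrete illustration of the embedding-based retrieval the abstract describes, the sketch below encodes dictionary glosses with a multilingual sentence transformer and ranks headwords by cosine similarity against a query definition. The checkpoint name and the toy entries are illustrative assumptions, not the exact model or data used in the paper.

# A minimal sketch of embedding-based reverse-dictionary retrieval.
# The checkpoint name and the toy entries below are illustrative
# assumptions, not the exact model or data used in the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Dictionary entries: each headword paired with its gloss (definition).
glosses = {
    "قمر": "the natural satellite that orbits the Earth",  # qamar (moon)
    "شمس": "the star at the center of the solar system",   # shams (sun)
}

# Embed every gloss once; at query time, embed the input definition and
# rank headwords by cosine similarity between the two embeddings.
gloss_emb = model.encode(list(glosses.values()), convert_to_tensor=True)
query_emb = model.encode("the body that lights up the night sky", convert_to_tensor=True)

scores = util.cos_sim(query_emb, gloss_emb)[0]
best = scores.argmax().item()
print(list(glosses.keys())[best])  # expected: قمر (moon)

In a full system the gloss embeddings would be precomputed and indexed so that each query requires only one encoder forward pass plus a nearest-neighbor lookup.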