Benchmarking Transformer-based Language Models for Arabic Sentiment and Sarcasm Detection

Ibrahim Abu Farha, Walid Magdy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The introduction of transformer-based language models has been a revolutionary step for natural language processing (NLP) research. These models, such as BERT, GPT and ELECTRA, led to state-of-the-art performance in many NLP tasks. Most of these models were initially developed for English, and models for other languages followed later. Recently, several Arabic-specific models have started emerging. However, there are limited direct comparisons between these models. In this paper, we evaluate the performance of 24 of these models on Arabic sentiment and sarcasm detection. Our results show that the best-performing models are those that are pretrained only on Arabic data, including dialectal Arabic, and use a larger number of parameters, such as the recently released MARBERT. However, we noticed that AraELECTRA is one of the top-performing models while being much more efficient in its computational cost. Finally, the experiments on the AraGPT2 variants showed low performance compared to the BERT models, which indicates that AraGPT2 might not be suitable for classification tasks.
Original language: English
Title of host publication: Proceedings of the Sixth Arabic Natural Language Processing Workshop
Publisher: Association for Computational Linguistics (ACL)
Pages: 21-31
Number of pages: 11
ISBN (Print): 978-1-954085-09-1
Publication status: Published - 19 Apr 2021
Event: The Sixth Arabic Natural Language Processing Workshop - Virtual
Duration: 19 Apr 2021 - 19 Apr 2021
Internet address: https://sites.google.com/view/wanlp2021

Workshop

Workshop: The Sixth Arabic Natural Language Processing Workshop
Abbreviated title: WANLP 2021
Period: 19/04/21 - 19/04/21
