Abstract / Description of output
When annotating multi-dialect Arabic datasets, it is common to randomly assign the samples across a pool of native Arabic speakers. Recent analyses recommended routing dialectal samples to native speakers of their respective dialects to build higher-quality datasets. However, automatically identifying the dialect of samples is hard. Moreover, the pool of annotators who are native speakers of specific Arabic dialects might be scarce. Arabic Level of Dialectness (ALDi) was recently introduced as a quantitative variable that measures how much a sentence diverges from Standard Arabic. When samples are randomly assigned to annotators, we hypothesize that samples with higher ALDi scores are harder to label, especially if they are written in dialects that the annotators do not speak. We test this by analyzing the relation between ALDi scores and annotator agreement on 15 public datasets that provide raw individual annotations for various sentence-classification tasks. We find strong evidence supporting our hypothesis for 11 of them. Consequently, we recommend prioritizing the routing of samples with high ALDi scores to native speakers of each sample's dialect, for which the dialect can also be automatically identified with higher accuracy.
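The analysis described above can be illustrated with a short script: score each annotated sentence for dialectness, compute a per-sample agreement measure from the raw annotations, and check whether the two are negatively associated. The sketch below is not the authors' released code; the `samples` structure with its `aldi` and `labels` fields, the majority-label agreement measure, and the use of a Pearson correlation are all assumptions made for the example (in practice the ALDi score would come from a pretrained ALDi estimator).

```python
# Minimal sketch (not the paper's code): test whether higher ALDi scores
# go together with lower inter-annotator agreement on one dataset.
# The "aldi" and "labels" fields are hypothetical placeholders.
from collections import Counter
from scipy.stats import pearsonr

samples = [
    {"aldi": 0.08, "labels": ["pos", "pos", "pos"]},
    {"aldi": 0.72, "labels": ["pos", "neg", "pos"]},
    {"aldi": 0.91, "labels": ["neg", "pos", "obj"]},
    # ... one entry per annotated sentence in the dataset
]

def agreement(labels):
    """Fraction of annotators who chose the sample's majority label."""
    majority_count = Counter(labels).most_common(1)[0][1]
    return majority_count / len(labels)

aldi_scores = [s["aldi"] for s in samples]
agreements = [agreement(s["labels"]) for s in samples]

# A significantly negative correlation supports the hypothesis that
# highly dialectal (high-ALDi) samples attract more annotator disagreement.
r, p = pearsonr(aldi_scores, agreements)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```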
Original language | English
---|---
Title of host publication | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics
Publisher | ACL Anthology
Publication status | Accepted/In press - 14 May 2024
Event | The 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand. Duration: 11 Aug 2024 → 16 Aug 2024. https://2024.aclweb.org/
Conference
Conference | The 62nd Annual Meeting of the Association for Computational Linguistics |
---|---|
Abbreviated title | ACL 2024 |
Country/Territory | Thailand |
City | Bangkok |
Period | 11/08/24 → 16/08/24 |
Internet address | https://2024.aclweb.org/