Explainability and hate speech: Structured explanations make social media moderators faster
Abstract
Content moderators play a key role in keeping the conversation on social media healthy. While the high volume of content they need to judge represents a bottleneck in the moderation pipeline, no studies have explored how models could support them in making faster decisions. There is, by now, a vast body of research into detecting hate speech, sometimes explicitly motivated by a desire to help improve content moderation, but published research using real content moderators is scarce. In this work, we investigate the effect of explanations on the speed of real-world moderators. Our experiments show that while generic explanations do not affect their speed and are often ignored, structured explanations lower moderators' decision-making time by 7.4%.
Original language | English |
---|---|
Title of host publication | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics |
Publisher | ACL Anthology |
DOIs | |
Publication status | Accepted/In press - 16 May 2024 |
Event | The 62nd Annual Meeting of the Association for Computational Linguistics, Centara Grand and Bangkok Convention Centre at CentralWorld, Bangkok, Thailand. Duration: 11 Aug 2024 → 16 Aug 2024. Conference number: 62. https://2024.aclweb.org/ |
Conference
Conference | The 62nd Annual Meeting of the Association for Computational Linguistics |
---|---|
Abbreviated title | ACL 2024 |
Country/Territory | Thailand |
City | Bangkok |
Period | 11/08/24 → 16/08/24 |
Internet address | https://2024.aclweb.org/ |
Fingerprint
Dive into the research topics of 'Explainability and hate speech: Structured explanations make social media moderators faster'. Together they form a unique fingerprint.
Projects per year
TEAMER: Teaching Machines to Reason Like Humans
Lapata, M. (Principal Investigator)
UK central government bodies/local authorities, health and hospital authorities
1/10/21 → 30/09/26
Project: Research
TransModal: Translating from Multiple Modalities into Text
Lapata, M. (Principal Investigator)
1/09/16 → 31/08/22
Project: Research