Abstract
To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators. In order to guarantee the enforcement of a consistent policy, moderators are provided with detailed guidelines. In contrast, most state-of-the-art models learn what abuse is from labeled examples and as a result base their predictions on spurious cues, such as the presence of group identifiers, which can be unreliable. In this work we introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone. We propose a machine-friendly representation of the policy that moderators wish to enforce, by breaking it down into a collection of intents and slots. We collect and annotate a dataset of 3,535 English posts with such slots, and show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
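To make the intent-and-slot idea concrete, below is a minimal sketch of how a policy rule and a model prediction could be represented and matched. The class names, intent label, slot names, and matching logic are illustrative assumptions for exposition, not the schema or architecture used in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PolicyRule:
    """One rule from the moderation policy, decomposed into an intent
    (the category of abuse it covers) and the slots that must be filled
    for a post to fall under the rule. Illustrative only."""
    intent: str
    required_slots: List[str]


@dataclass
class Prediction:
    """Hypothetical output of an intent-classification + slot-filling model:
    a predicted intent and the text spans extracted for each slot."""
    intent: str
    slots: Dict[str, str]


def violates(rule: PolicyRule, pred: Prediction) -> bool:
    """A post violates a rule if the predicted intent matches the rule's intent
    and every required slot is filled; the filled slots double as a rationale."""
    if pred.intent != rule.intent:
        return False
    return all(pred.slots.get(slot) for slot in rule.required_slots)


# Example: an assumed rule against dehumanising comparisons of a protected group.
rule = PolicyRule(
    intent="dehumanisation",
    required_slots=["protected_group", "dehumanising_comparison"],
)

# Output a slot-filling model might produce for an abusive post.
pred = Prediction(
    intent="dehumanisation",
    slots={"protected_group": "immigrants", "dehumanising_comparison": "vermin"},
)

print(violates(rule, pred))  # True -> flag for a moderator, with the slots as rationale
```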
| Original language | English |
| --- | --- |
| Pages (from-to) | 1440–1454 |
| Number of pages | 15 |
| Journal | Transactions of the Association for Computational Linguistics |
| Volume | 10 |
| DOIs | |
| Publication status | Published - 23 Dec 2022 |
Fingerprint
Dive into the research topics of 'Explainable Abuse Detection as Intent Classification and Slot Filling'. Together they form a unique fingerprint.
Projects
TEAMER: Teaching Machines to Reason Like Humans
UK central government bodies/local authorities, health and hospital authorities
1/10/21 → 30/09/26
Project: Research