Explainable Abuse Detection as Intent Classification and Slot Filling

Agostina Calabrese, Björn Ross, Maria Mirella Lapata

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators. In order to guarantee the enforcement of a consistent policy, moderators are provided with detailed guidelines. In contrast, most state-of-the-art models learn what abuse is from labeled examples and as a result base their predictions on spurious cues, such as the presence of group identifiers, which can be unreliable. In this work we introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone. We propose a machine-friendly representation of the policy that moderators wish to enforce, by breaking it down into a collection of intents and slots. We collect and annotate a dataset of 3,535 English posts with such slots, and show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
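As a rough illustration of the machine-friendly policy representation the abstract describes, a policy rule can be encoded as an intent (which guideline a post violates) paired with slots (the text spans that justify the decision). The rule name, slot names, and function below are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch: a post annotated with an intent and its filled slots,
# in the spirit of policy-aware abuse detection. Names are assumptions,
# not the paper's actual annotation schema.

def annotate(post, intent, slots):
    """Pair a post with its policy intent and the spans filling each slot."""
    return {"post": post, "intent": intent, "slots": slots}

example = annotate(
    post="All [group] are [insult]",
    intent="dehumanisation",        # which policy rule the post violates
    slots={                         # spans that serve as the rationale
        "target": "[group]",
        "derogatory_phrase": "[insult]",
    },
)

# The filled slots double as an explanation: a moderator can check each
# span against the guideline that the intent points to.
print(example["intent"])
print(sorted(example["slots"]))
```

Framing detection this way means a model prediction is only accepted when it can also point to the spans that fill the rule's slots, which is what makes the decision explainable.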
Original language: English
Pages (from-to): 1440–1454
Number of pages: 15
Journal: Transactions of the Association for Computational Linguistics
Volume: 10
DOIs
Publication status: Published - 23 Dec 2022
