Recalibrating classifiers for interpretable abusive content detection

Bertie Vidgen, Sam Staton, Scott Hale, Ohad Kammar, Helen Margetts, Tom Melham, Marcin Szymczak

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We investigate the use of machine learning classifiers for detecting online abuse in empirical research. We show that uncalibrated classifiers (i.e. where the ‘raw’ scores are used) align poorly with human evaluations. This limits their use for understanding the dynamics, patterns and prevalence of online abuse. We examine two widely used classifiers (created by Perspective and Davidson et al.) on a dataset of tweets directed against candidates in the UK’s 2017 general election. A Bayesian approach is presented to recalibrate the raw scores from the classifiers, using probabilistic programming and newly annotated data. We argue that interpretability evaluation and recalibration are integral to the application of abusive content classifiers.
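
The abstract describes recalibrating raw classifier scores against newly annotated labels using probabilistic programming. As a rough illustration only, and not the authors' actual model, the sketch below shows one way such a Bayesian recalibration could be expressed, assuming a Platt-style logistic recalibration fitted in PyMC; the data values, priors, and variable names are hypothetical.

    import numpy as np
    import pymc as pm

    # Illustrative data: raw classifier scores and binary human annotations
    # (hypothetical values; the paper uses newly annotated UK election tweets).
    raw_scores = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.98])
    human_labels = np.array([0, 0, 0, 1, 1, 1])

    # Clip and logit-transform the raw scores so the linear recalibration
    # operates on an unbounded scale.
    eps = 1e-6
    clipped = np.clip(raw_scores, eps, 1 - eps)
    logit_scores = np.log(clipped / (1 - clipped))

    with pm.Model() as recalibration:
        # Weakly informative priors on the intercept and slope.
        alpha = pm.Normal("alpha", mu=0.0, sigma=2.0)
        beta = pm.HalfNormal("beta", sigma=2.0)

        # Recalibrated probability of abuse for each tweet.
        p_abusive = pm.Deterministic(
            "p_abusive", pm.math.sigmoid(alpha + beta * logit_scores)
        )

        # Likelihood: human annotations given the recalibrated probabilities.
        pm.Bernoulli("obs", p=p_abusive, observed=human_labels)

        trace = pm.sample(1000, tune=1000, chains=2, random_seed=42)

    # Posterior means of the recalibrated scores, which can be compared
    # directly with the human evaluations.
    print(trace.posterior["p_abusive"].mean(dim=("chain", "draw")).values)

In this kind of setup the posterior over the recalibrated probabilities, rather than the classifier's raw scores, is what would be interpreted when estimating the prevalence of abuse.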
Original language: English
Title of host publication: Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science
Editors: David Bamman, Dirk Hovy, David Jurgens, Brendan O'Connor, Svitlana Volkova
Publisher: ACL Anthology
Pages: 132-138
Number of pages: 7
ISBN (Print): 978-1-952148-80-4
DOIs
Publication status: Published - 20 Nov 2020
Event: Fourth Workshop on Natural Language Processing and Computational Social Science 2020 - Online
Duration: 20 Nov 2020 - 20 Nov 2020
Conference number: 4
https://sites.google.com/site/nlpandcss/previous-editions/nlp-css-at-emnlp-2020

Workshop

Workshop: Fourth Workshop on Natural Language Processing and Computational Social Science 2020
Abbreviated title: NLP+CSS 2020
Period: 20/11/20 - 20/11/20
Internet address: https://sites.google.com/site/nlpandcss/previous-editions/nlp-css-at-emnlp-2020
