Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research. NLP research rarely engages with bias in social contexts, limiting its ability to mitigate bias. While researchers have recommended actions, technical methods, and documentation practices, no methodology exists to integrate critical reflections on bias with technical NLP methods. In this paper, after an extensive and interdisciplinary literature review, we contribute a bias-aware methodology for NLP research. We also contribute a definition of biased text, a discussion of the implications of biased NLP systems, and a case study demonstrating how we are executing the bias-aware methodology in research on archival metadata descriptions.
Original language: English
Publication status: Accepted/In press - 9 Oct 2020
Event: 2nd Workshop on Gender Bias in Natural Language Processing at COLING 2020
Duration: 13 Dec 2020 – 13 Dec 2020
