Abstract
We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research. NLP research rarely engages with bias in social contexts, limiting its ability to mitigate bias. While researchers have recommended actions, technical methods, and documentation practices, no methodology exists to integrate critical reflections on bias with technical NLP methods. In this paper, after an extensive and interdisciplinary literature review, we contribute a bias-aware methodology for NLP research. We also contribute a definition of biased text, a discussion of the implications of biased NLP systems, and a case study demonstrating how we are executing the bias-aware methodology in research on archival metadata descriptions.
| Original language | English |
| --- | --- |
| Publication status | Accepted/In press - 9 Oct 2020 |
| Event | 2nd Workshop on Gender Bias in Natural Language Processing at COLING 2020, 13 Dec 2020 |