This Prompt is Measuring < MASK > : Evaluating Bias Evaluation in Language Models

Seraphina Goldfarb-Tarrant*, Eddie Ungless*, Esma Balkir, Su Lin Blodgett

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms. We analyse the body of work that uses prompts and templates to assess bias in language models. We draw on a measurement modelling framework to create a taxonomy of attributes that capture what a bias test aims to measure and how that measurement is carried out. By applying this taxonomy to 90 bias tests, we illustrate qualitatively and quantitatively that core aspects of bias test conceptualisations and operationalisations are frequently unstated or ambiguous, carry implicit assumptions, or are mismatched. Our analysis illuminates the scope of possible bias types the field is able to measure, and reveals types that are as yet under-researched. We offer guidance to enable the community to explore a wider section of the possible bias space, and to better close the gap between desired outcomes and experimental design, both for bias and for evaluating language models more broadly.
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL 2023
Place of publication: Stroudsburg
Publisher: Association for Computational Linguistics (ACL)
Pages: 2209-2225
ISBN (Print): 9781959429623
Publication status: Published - 9 Jul 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023
Conference number: 61
https://2023.aclweb.org/

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics
Abbreviated title: ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 - 14/07/23
Internet address: https://2023.aclweb.org/
