Collaboration Across the Archival and Computational Sciences to Address Legacies of Gender Bias in Descriptive Metadata

Lucy Havens*, Beatrice Alex, Rachel Hosker, Benjamin Bach, Melissa Terras

*Corresponding author for this work

Research output: Contribution to conference › Abstract › peer-review

Abstract / Description of output

This presentation reports on a case study investigating how Natural Language Processing (NLP), a field that applies computational methods such as Machine Learning to human-written texts, can support the measurement and evaluation of gender-biased language in archival catalogs. Working with English descriptions from the catalog metadata of the University of Edinburgh's Archives, we created an annotated dataset and classification models that identify gender biases in the descriptions. Although conducted with archival data, the case study holds relevance across Galleries, Libraries, Archives, and Museums (GLAM), particularly for institutions with catalog descriptions in English. In addition to bringing NLP methods to Archives, we identified opportunities to bring Archival Science methods, such as Cultural Humility (Tai, 2021) and Feminist Standpoint Appraisal (Caswell, 2022), to NLP. Through this two-way disciplinary exchange, we demonstrate how Humanistic approaches to bias and uncertainty can upend legacies of gender-based oppression that most computational approaches to date uphold when working with data at scale.
Original language: English
Pages: 267–270
Publication status: Published - 1 Jul 2023
