"I wouldn't say offensive but⋯": Disability-Centered Perspectives on Large Language Models

Vinitha Gadiraju, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton, Robin Brewer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward historically marginalized communities. While previous work has primarily focused on harms related to age and race, emerging research has shown that biases toward disabled communities exist. This study extends prior work exploring the existence of harms by identifying categories of LLM-perpetuated harms toward the disability community. We conducted 19 focus groups, during which 56 participants with disabilities probed a dialog model about disability and discussed and annotated its responses. Participants rarely characterized model outputs as blatantly offensive or toxic. Instead, participants used nuanced language to detail how the dialog model mirrored subtle yet harmful stereotypes they encountered in their lives and dominant media, e.g., inspiration porn and able-bodied saviors. Participants often implicated training data as a cause for these stereotypes and recommended training the model on diverse identities from disability-positive resources. Our discussion further explores representative data strategies to mitigate harm related to different communities through annotation co-design with ML researchers and developers.

Original language: English
Title of host publication: Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Publisher: Association for Computing Machinery
Pages: 205-216
Number of pages: 12
ISBN (Electronic): 9781450372527
Publication status: Published - 12 Jun 2023
Event: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, United States
Duration: 12 Jun 2023 - 15 Jun 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Country/Territory: United States
City: Chicago
Period: 12/06/23 - 15/06/23

Keywords

  • algorithmic harms
  • artificial intelligence
  • chatbot
  • data annotation
  • dialog model
  • disability representation
  • large language models
  • qualitative

