LLMs produce racist output when prompted in African American English

Su Lin Blodgett, Zeerak Talat

Research output: Contribution to journal › Comment/debate › peer-review

Abstract

Large language models (LLMs) are becoming less overtly racist, but respond negatively to text in African American English. Such ‘covert’ racism could harm speakers of this dialect when LLMs are used for decision-making.
Original language: English
Pages (from-to): 40-41
Number of pages: 2
Journal: Nature
Volume: 633
DOIs
Publication status: Published - 28 Aug 2024

Keywords

  • language
  • machine learning
  • society
