Potential Pitfalls with Automatic Sentiment Analysis: The Example of Queerphobic Bias

Eddie Ungless, Björn Ross, Vaishak Belle

Research output: Contribution to journal › Article › peer-review

Abstract

Automated sentiment analysis can help efficiently detect trends in patients’ moods, consumer preferences, political attitudes and more. Unfortunately, like many natural language processing techniques, sentiment analysis can show bias against marginalised groups. We illustrate this point by showing how six popular sentiment analysis tools respond to sentences about queer identities, expanding on existing work on gender, ethnicity and disability. We find evidence of bias against several marginalised queer identities, including in the two models from Google and Amazon that seem to have been subject to superficial debiasing. We conclude with guidance on selecting a sentiment analysis tool to minimise the risk of model bias skewing results.
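The evaluation approach the abstract describes — feeding tools otherwise neutral sentences that differ only in the identity term, then comparing scores — can be illustrated with a minimal sketch. The toy lexicon-based scorer below is a hypothetical stand-in, not one of the six tools the paper evaluates; in practice you would call a real model or API in place of `toy_sentiment`.

```python
# Toy illustration of template-based bias probing for sentiment tools.
# The tiny lexicon, scorer and identity terms are hypothetical stand-ins
# for the commercial and open tools evaluated in the paper.

TOY_LEXICON = {"happy": 1.0, "sad": -1.0}

def toy_sentiment(sentence: str) -> float:
    """Average the lexicon scores of known words (0.0 for unknown words)."""
    words = sentence.lower().rstrip(".").split()
    return sum(TOY_LEXICON.get(w, 0.0) for w in words) / len(words)

def probe_bias(template: str, identity_terms: list[str]) -> dict[str, float]:
    """Fill a neutral template with each identity term and record the score.

    An unbiased tool should return (near-)identical scores for every term;
    a systematic gap for one group is evidence of bias in the tool under test.
    """
    return {t: toy_sentiment(template.format(term=t)) for t in identity_terms}

scores = probe_bias("I am a {term} person.", ["gay", "straight"])
```

Swapping `toy_sentiment` for a call to an off-the-shelf model lets the same harness compare tools directly, which is how one might apply the paper's guidance when selecting a tool.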
  • Original language: English
  • Pages (from-to): 2211-2229
  • Number of pages: 19
  • Journal: Social Science Computer Review
  • Issue number: 6
  • Early online date: 2 Feb 2023
  • Publication status: Published - 1 Dec 2023

Keywords

  • sentiment analysis
  • AI bias
  • natural language processing
  • queerphobia


