How Does AI Represent Social Concepts? Examining the Visual Representation of Care in Text-to-Image Tools

Melody Wang*, Nichole Fernandez, John Vines

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Text-to-image (T2I) generative AI tools like Midjourney are growing in capability and popularity, promising a wide range of applications. However, concerns are rising over biases in how they represent social concepts like care, and over the lack of guidance for designers and users to address these in practice. This paper first presents an analysis of 140 “photos of care” generated by Midjourney, and then explores how prompting might influence the results. The findings reveal that, by default, AI-generated images reproduce stereotypical and reductive representations of care, neglecting the broad spectrum of care practices in everyday life. Furthermore, we find that while prompt engineering might mitigate certain biases, it requires specialised skills, knowledge, and an ongoing reflexive approach to generate meaningful outputs. We conclude by proposing a reflexive prompting framework and discussing the implications for future T2I evaluation and its responsible use and design.
Original language: English
Title of host publication: Proceedings of the ACM Conference on Designing Interactive Systems 2025 (DIS '25)
Editors: Nuno Jardim Nunes, Valentina Nisi, Ian Oakley, Qian Yang, Clement Zheng
Place of publication: New York, NY, United States
Publisher: ACM
Pages: 2770-2786
Number of pages: 17
ISBN (Electronic): 9798400714856
DOIs
Publication status: Published - 4 Jul 2025

Keywords

  • Bias
  • Care
  • Visual Representation
  • Generative AI
  • Text-to-image Models
  • Prompt Engineering
  • Responsible AI
