Fairness in AI and Its Long-Term Implications on Society

Ondrej Bohdal*, Timothy Hospedales, Philip H. S. Torr, Fazl Barez

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society. However, AI systems have also been shown to harm parts of the population through biased predictions. We take a closer look at AI fairness and analyse how a lack of it can deepen biases over time and act as a social stressor. If these issues persist, they could have undesirable long-term implications for society, reinforced by interactions with other risks. We examine current strategies for improving AI fairness, assess their limitations for real-world deployment, and explore potential paths forward to ensure we reap AI's benefits without harming significant parts of society.
Original language: English
Title of host publication: Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference
Publisher: Stanford Existential Risks Initiative
Pages: 171–186
Publication status: Published – 18 Sept 2023
Event: Third Annual Stanford Existential Risks Conference, Stanford University, Stanford, United States
Duration: 20 Apr 2023 – 22 Apr 2023
Conference number: 3
https://seri.stanford.edu/news/third-annual-stanford-existential-risks-conference-april-20-22-2023
