Abstract / Description of output
Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society. However, AI systems have also been shown to harm parts of the population through biased predictions. We take a closer look at AI fairness and analyse how a lack of it can deepen biases over time and act as a social stressor. If these issues persist, they could have undesirable long-term implications for society, reinforced by interactions with other risks. We examine current strategies for improving AI fairness, assess their limitations with respect to real-world deployment, and explore potential paths forward to ensure we reap AI's benefits without harming significant parts of society.
Original language | English |
---|---|
Title of host publication | Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference |
Publisher | Stanford Existential Risks Initiative |
Pages | 171-186 |
Publication status | Published - 18 Sept 2023 |
Event | Third Annual Stanford Existential Risks Conference, Stanford University, Stanford, United States. Duration: 20 Apr 2023 → 22 Apr 2023. Conference number: 3 |
Conference
Conference | Third Annual Stanford Existential Risks Conference |
---|---|
Country/Territory | United States |
City | Stanford |
Period | 20/04/23 → 22/04/23 |
Internet address | https://seri.stanford.edu/news/third-annual-stanford-existential-risks-conference-april-20-22-2023 |