Abstract / Description of output
The risks AI presents to society are broadly understood to be manageable through ‘general calculus’, i.e., general frameworks designed to enable those involved in the development of AI to apprehend and manage risk, such as AI impact assessments, ethical frameworks, emerging international standards, and regulations. This paper elaborates how risk is apprehended and managed by a regulator, a developer, and a cyber-security expert. It reveals that risk and risk management are dependent on mundane situated practices not encapsulated in general calculus. Situated practice surfaces ‘iterable epistopics’, revealing how those involved in the development of AI know and subsequently respond to risk, and uncovering major challenges in their work. The ongoing discovery and elaboration of epistopics of risk in AI a) furnishes a potential program of interdisciplinary inquiry, b) provides AI developers with a means of apprehending risk, and c) informs the ongoing evolution of general calculus.
| Original language | English |
|---|---|
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | AI and Society |
| Early online date | 25 Jul 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 25 Jul 2024 |
Keywords / Materials (for Non-textual outputs)
- Artificial Intelligence (AI)
- trust
- risk
- ethnomethodology (EM)
- epistopics
Projects
- 1 Finished
UKRI Trustworthy Autonomous Systems Node in Governance and Regulation
Ramamoorthy, R., Belle, V., Bundy, A., Jackson, P., Lascarides, A. & Rajan, A.
1/11/20 → 30/04/24
Project: Research