AI and the iterable epistopics of risk

Andy Crabtree, Glenn McGarry, Lachlan Urquhart

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

The risks AI presents to society are broadly understood to be manageable through ‘general calculus’, i.e., general frameworks designed to enable those involved in the development of AI to apprehend and manage risk, such as AI impact assessments, ethical frameworks, emerging international standards, and regulations. This paper elaborates how risk is apprehended and managed by a regulator, a developer and a cyber-security expert. It reveals that risk and risk management are dependent on mundane situated practices not encapsulated in general calculus. Situated practice surfaces ‘iterable epistopics’, revealing how those involved in the development of AI know and subsequently respond to risk, and uncovering major challenges in their work. The ongoing discovery and elaboration of epistopics of risk in AI a) furnishes a potential program of interdisciplinary inquiry, b) provides AI developers with a means of apprehending risk, and c) informs the ongoing evolution of general calculus.
Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: AI and Society
Early online date: 25 Jul 2024
DOIs
Publication status: E-pub ahead of print - 25 Jul 2024

Keywords / Materials (for Non-textual outputs)

  • Artificial Intelligence (AI)
  • trust
  • risk
  • ethnomethodology (EM)
  • epistopics
