Abstract / Description of output
We consider the anticipated adoption of Artificial Intelligence (AI) in medical diagnosis. We examine how seemingly compelling claims are tested as AI tools move into real-world settings and discuss how analysts can develop effective understandings in novel and rapidly changing settings.
Four case studies highlight the challenges of utilising diagnostic AI tools at differing stages in their innovation journey: two ‘upstream’ cases seeking to demonstrate the practical applicability of AI, and two ‘downstream’ cases focusing on the roll-out and scaling of more established applications.
We observed an unfolding, uncoordinated process of social learning capturing two key moments: i) experiments to create and establish the clinical potential of AI tools; and ii) attempts to verify their dependability in clinical settings while extending their scale and scope. Health professionals critically appraise tool performance, relying on tools selectively where their results can be demonstrably trusted, in a de facto model of responsible use. We note a shift from procuring stand-alone solutions to deploying suites of AI tools through platforms, which facilitates adoption and reduces the costs of procurement, implementation and evaluation that impede the viability of stand-alone solutions.
New conceptual frameworks and methodological strategies are needed to address the rapid evolution of AI tools as they move from research settings and are deployed in real-world care across multiple settings. We observe how, in this process of deployment, AI tools become ‘domesticated’. We propose longitudinal and multisite ‘biographical’ investigations of medical AI rather than snapshot studies of emerging technologies that fail to capture change and variation in performance across contexts.
| Original language | English |
| --- | --- |
| Article number | 102469 |
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | Technology in Society |
| Volume | 76 |
| Early online date | 26 Jan 2024 |
| DOIs | |
| Publication status | Published - Mar 2024 |
Keywords / Materials (for Non-textual outputs)
- domestication
- social learning
- artificial intelligence
- machine learning
- medicine
- health
- diagnosis
Projects
- 1 Finished
UKRI Trustworthy Autonomous Systems Node in Governance and Regulation
Ramamoorthy, R., Belle, V., Bundy, A., Jackson, P., Lascarides, A. & Rajan, A.
1/11/20 → 30/04/24
Project: Research