TY - GEN
T1 - Regulating AI/ML-enabled medical devices in the UK
AU - Li, Phoebe
AU - Williams, Robin
AU - Gilbert, Stephen
AU - Anderson, Stuart
PY - 2023/7/11
Y1 - 2023/7/11
N2 - The recent achievements of artificial intelligence (AI) open up opportunities for new tools to assist medical diagnosis and care delivery. However, AI is best developed through repeated cycles of learning and implementation, which may pose challenges to the existing system of regulating medical devices. Product developers face a tension between the benefits of continuously improving and deploying algorithms and the need to keep products unchanged so that evidence can be collected for safety assurance processes. The challenge is how to balance potential benefits against the need to assure safety. Governance and assurance requirements that can accommodate live or near-live machine learning (ML) will be needed soon, as this approach is likely to become highly important in healthcare and other fields of application. We have entered a phase of regulatory experimentation, with various novel approaches emerging around the world. The process of social learning concerns not only the application of AI but also the institutional arrangements for its safe and dependable deployment, including regulatory experimentation, likely within sandboxes. This paper reflects on discussions from two recent Chatham House workshops on regulating AI in software as a medical device (SaMD), hosted by the UKRI/EPSRC 'Trustworthy Autonomous Systems: Regulation and Governance' node, with a special focus on recent regulatory attempts in the UK and internationally.
AB - The recent achievements of artificial intelligence (AI) open up opportunities for new tools to assist medical diagnosis and care delivery. However, AI is best developed through repeated cycles of learning and implementation, which may pose challenges to the existing system of regulating medical devices. Product developers face a tension between the benefits of continuously improving and deploying algorithms and the need to keep products unchanged so that evidence can be collected for safety assurance processes. The challenge is how to balance potential benefits against the need to assure safety. Governance and assurance requirements that can accommodate live or near-live machine learning (ML) will be needed soon, as this approach is likely to become highly important in healthcare and other fields of application. We have entered a phase of regulatory experimentation, with various novel approaches emerging around the world. The process of social learning concerns not only the application of AI but also the institutional arrangements for its safe and dependable deployment, including regulatory experimentation, likely within sandboxes. This paper reflects on discussions from two recent Chatham House workshops on regulating AI in software as a medical device (SaMD), hosted by the UKRI/EPSRC 'Trustworthy Autonomous Systems: Regulation and Governance' node, with a special focus on recent regulatory attempts in the UK and internationally.
KW - Artificial Intelligence (AI)
KW - Artificial Intelligence-enabled Medical Device (AIeMD)
KW - autonomous systems
KW - regulation
KW - Software as a Medical Device (SaMD)
UR - http://www.scopus.com/inward/record.url?scp=85167998376&partnerID=8YFLogxK
U2 - 10.1145/3597512.3599704
DO - 10.1145/3597512.3599704
M3 - Conference contribution
AN - SCOPUS:85167998376
T3 - ACM International Conference Proceeding Series
SP - 1
EP - 10
BT - TAS '23
PB - Association for Computing Machinery (ACM)
T2 - 1st International Symposium on Trustworthy Autonomous Systems, TAS 2023
Y2 - 11 July 2023 through 12 July 2023
ER -