- NHSX publishes report outlining plans for artificial intelligence in healthcare
- Research reveals there is a “50/50 split” between developers who sought ethical approval for new systems and those who did not
- New national AI lab will aim to overcome “regulatory pain points” in the development pathway
Half of developers are not seeking ethical approval before they start producing new artificial intelligence systems for healthcare, according to a report by NHSX.
The report, Artificial Intelligence: How to get it right, notes that the “complex governance framework” around AI tech could limit innovation and potentially compromise patient safety.
It also revealed that there is “an almost 50/50 split” between developers who sought ethical approval before they started the development process for a new AI system and those who did not.
NHSX is a joint unit of NHS England, NHS Improvement and the Department of Health and Social Care, set up by health and social care secretary Matt Hancock earlier this year.
Its new report states: “The current complex governance framework for AI technologies is perhaps limiting innovation and potentially risking patient safety.
“The survey results also reveal that it is currently quite hit and miss whether or not developers seek ethical approval at the beginning of the development process with an almost 50/50 split between those that did and those that did not.
“This is in part due to a lack of awareness: almost a third of respondents said they were either not developing in line with the [DHSC] code of conduct or were not sure.
“The main reason given for this was ‘I was unaware that it existed.’ We (NHSX) will need to ensure that in all funding applications the expectation of compliance is made clear.”
The code, called Data-Driven Health and Care Technology, was published by the DHSC in September 2018 and revised in February this year, as guidance for those involved in data-driven tech in the NHS.
The research, published today, also outlines the role of NHSX’s £250m AI lab announced by the DHSC in August.
It notes that the lab will aim to overcome “regulatory pain points” in the development pathway for AI products, including lack of coordination between regulators and statutory bodies.
It is intended to encourage the spread of “good” innovation, develop a governance framework and produce a “how to” guide for AI developers. The guide will aim to provide developers with information on what is expected from them when creating AI products for use in healthcare and the processes involved.
The report states: “The intention is to make it very clear to developers not only what is expected of them in order to develop AI for use in health and care, but also how they might go about doing it.
“This is because ethical and behavioural principles are necessary but not sufficient to ensure the design and practical implementation of responsible AI.
“The ultimate aim is to build transparency and trust.”
The report also highlighted ongoing AI work in healthcare – including the breast cancer screening programme run by the East Midlands Radiology Consortium (EMRAD).
AI uses computer systems to carry out tasks or solve problems that usually require human intelligence. In 2018, EMRAD teamed up with two UK AI companies, Faculty and Kheiron Medical Technologies, to use AI tools to carry out second mammogram readings during breast cancer screenings in order to reduce staffing pressures.
Google DeepMind is also exploring the use of AI to predict and prevent the onset of life-threatening conditions in NHS hospitals. In the US, DeepMind has applied a deep learning approach to a large set of health records to show it can predict the onset of acute kidney injury – a common and serious condition often developed in hospital – up to 48 hours in advance.
The NHSX research found that 20 developers are confident their AI technology will be ready for use in the health system within one year, while 70 believe their products will be available within five years.
However, only half of AI products available globally have market authorisation.
NHSX chief digital officer Tara Donnelly said: “Artificial Intelligence has already shown it has the potential to not only improve but save lives across the health and care system which is why it is a priority in the NHS long-term plan.
“From processing scan results to automating language, this is an exciting era for technology in healthcare and the introduction of our new NHS AI Lab will help revolutionise the UK’s ability to take advantage of this technology, benefiting patients and clinical staff alike.”
Source: NHSX report
Source date: October 2019