Artificial intelligence has incredible potential to improve healthcare, but it still comes with many uncertainties. An HSJ webinar, in association with Amazon Web Services, brought together three experts to discuss the issues.


In association with AWS

The public has become more aware of AI in the past few years thanks to widely accessible tools like ChatGPT. This accessibility is driving adoption in the healthcare sector, suggested Health Economics Unit director Andi Orlowski.

The list of AI’s uses is already long and is growing daily. Chatbots, for example, can be used for tasks from managing patient referrals to handling internal IT queries, while other AI tools can help practitioners make sense of huge volumes of text by summarising and analysing it. Meanwhile, Mr Orlowski’s teams use it to help build models for the analysis of large data sets.

“The vast majority of data analysts these days, rather than poring through Google to work out how to program R, Python, or Excel, just ask Chat[GPT],” he said. “And Chat will tell you exactly how to do these things.”

James Kinross, a reader in colorectal surgery and consultant surgeon at Imperial College London, has been integrating AI into his practice for several years now. He helped develop a system that uses AI to analyse surgical video to help with training and was also involved in a project that used AI to surface actionable insight from large, complex data sets about covid.

“At one stage at Imperial we had the biggest data set, the biggest corpus of knowledge on covid anywhere in the world,” he said. “What we found was that we could turn that into text readable information and begin to use deep learning to derive insights from it quite quickly.”

The success of this led to funding that broadened the scope to other areas, and he now works with partners to develop tools that frontline clinicians can use to help make better decisions for patients.

Clearly, the sector is far beyond the pilot phase of AI and anyone involved in healthcare should be asking what the impact could be on their domain.

“We’re seeing organisations of all sizes who really want to get started with generative AI,” said Amazon Web Services’ head of healthcare data science Matthew Howard. “But critically, they also want to translate the momentum out of betas and prototypes and demos… into real-world innovation and productivity gains.”

Dr Howard, who works with healthcare organisations worldwide, sees increased interest in a choice of generative AI models, allowing researchers to pick the toolsets that best fit their needs. He gave the example of Genomics England, which has built an AI-powered literature review tool using AWS technologies. This, he said, is “effectively augmenting the ability of… teams to scan through a very large corpus of literature and then potentially identify clinically relevant associations.”

Indeed, one of AI’s biggest purported benefits is its ability to close workforce gaps in the NHS by doing some of the work that people are currently struggling with. Dr Howard sees a particular opportunity for AI to pick up some of the “undifferentiated heavy lifting off our clinical community,” allowing doctors, nurses, and researchers to spend more time on matters needing human oversight.

So where should people start with an AI project? The panel agreed a “working backwards” approach is required, with AI brought in to solve a clearly defined problem rather than used for its own sake. The teams behind such projects must be multidisciplinary, including not only digital stakeholders but also clinical and business leaders.

Mr Kinross has first-hand experience in implementing AI tools and the cultural challenges they can pose.

“A lot of these technologies are really profoundly disruptive, and they require clinicians to perhaps reconceptualise the way that they actually perform their job,” he said. This challenge is compounded when practitioners do not fully understand what generative AI is and how it works.

Members of the webinar audience were keen to know how the AI-curious can reassure themselves a tool is safe to use, and in what circumstances. The Medicines and Healthcare products Regulatory Agency now allows software (including AI) to be classified as a medical device, meaning such products undergo similar regulatory scrutiny.

“All of these AI tools – everything that is potentially patient-facing or will affect how patients are treated – have a rigorous and robust sign-off process all the way through the MHRA,” said Mr Orlowski. “So, the same way that you would expect our pharmaceutical companies to run through their products, there is a clear pathway for digital tools.”

Mr Orlowski pointed out that mass-market tools should not be used to process critical patient data or make clinical decisions, but they can be used to speed up day-to-day admin processes like summarising internal meeting notes. And, if they are used to help write code, it is the software built with this code that would be subject to regulatory testing and approval.

AI’s capabilities are not in question, and there is no shortage of people eager to use it. However, the panel agreed that trust is crucial to the future success of AI in healthcare. Mass adoption can only happen if clinicians and system leaders trust the tools they will be working with, and frameworks must be in place to reassure patients that their care and data are safe in the hands of AI.

An on-demand version of this webinar is available.

To access the recording, visit here and click play.  

If you have previously registered for the event, clicking the link will take you straight to the recording. Simply press play.

If you have not previously registered, complete the form that appears. You’ll then be able to watch the recording immediately.