As part of my collaboration with HIMSS Europe, last week I had the pleasure of interviewing Dr. Dominic King (@Dominic1King), the Health Lead of DeepMind, an artificial intelligence company based in London that was acquired by Google in 2014 for an estimated £400m. King, a former NHS surgeon, talks about the work he and his team are doing with the NHS and about the challenges of integrating AI into current healthcare systems.
What are your main tasks at DeepMind?
I oversee the health projects, making sure that our work is driven by input from doctors, nurses and patients, and backed by robust clinical evidence. I get to work with an incredible hybrid team of doctors, engineers, and security experts, all working towards improving healthcare with technology.
In which healthcare areas is DeepMind working right now?
We have two separate strands of work: the immediate development and deployment of our mobile app Streams (which doesn’t yet include any AI), and a set of longer-term AI research projects (which use depersonalised data). Our AI research is primarily focused on medical imaging, looking at the segmentation and classification of specific elements in a medical scan, and on making clinical predictions from electronic health records.
An AI system developed by DeepMind recently showed that it is capable of diagnosing more than 50 types of eye disease as accurately as world-leading experts. Is AI going to be the main diagnostic tool in the near future?
At the moment, the health efforts in AI that are most likely to reach clinical practice – such as using deep learning to analyse medical images far more efficiently than current techniques allow – will augment, not replace, an expert human’s clinical judgement. That is why our work with Moorfields Eye Hospital, to build an algorithm that can detect eye disease and prioritise the patients most in need of care, was so important: it addresses the ‘black box’ problem of algorithms by providing a visual representation and a percentage recommendation that tell doctors how the system reached its decision. This kind of explainability is important if AI is to be widely used in clinical practice. But final responsibility for diagnosis and treatment should continue to rest with the clinician, as it would with any healthcare process, technology-driven or otherwise.
How can you involve AI in healthcare without losing the patient perspective?
We believe that outcomes are better when patients and doctors work together, and we are committed to bringing the patient voice into all of our projects. We have already benefited from the thousands of hours of patient engagement run by our NHS partners. We are excited to put all the feedback we have gathered into practice, and have committed to extensive engagement in the months to come, including growing an online patient community to engage with us in real-time, hosting design sessions with patients, and bringing in patients to collaborate in our research partnerships.
One of the challenges of AI implementation is the integration of these systems into clinicians’ and nurses’ everyday work. How do you think that can be achieved?
Over the course of hundreds of hours of shadowing, interviews and workshops with nurses, doctors and patients, we have learned a lot about the problems they face and the systems they work with. A key theme from these discussions was not a pressing need for AI solutions, but rather the need for interoperable systems that connect with one another and make the transmission of health information easier. That is why we developed Streams, our secure mobile app that doesn’t currently use AI, to pull together information from different systems into one place.
Can you tell us more about Streams?
The first version of Streams has been in use at the Royal Free London NHS Trust since 2017 and we are delighted that the early feedback from nurses, doctors and patients has so far been really positive. Some of the nursing staff using Streams have estimated that it has been saving them up to two hours every day, which means they can spend more time face-to-face with patients. We will also be releasing findings from our peer-reviewed service evaluation soon. We have also signed Streams partnerships with Taunton and Somerset NHS Foundation Trust, Yeovil District Hospital NHS Foundation Trust and Imperial College Healthcare NHS Trust.
What would you say to the critics who have questioned the use of AI by the NHS?
AI could ultimately help nurses and doctors analyse information much more quickly and effectively, which could allow them to give the best treatment much sooner and, ultimately, to spend more time with the patients in their care. But we can only reap the benefits AI offers if clinicians and technologists work together to design solutions that actually work for clinical practice.
The Royal Free Hospital in London was the target of a great deal of criticism in 2017 for sharing 1.6 million patient data records with DeepMind. How is your company working to avoid similar situations?
We welcome scrutiny because it makes our work better. The ICO’s findings were about the Royal Free, as the data controller, not DeepMind, but they certainly prompted us to reflect on our own actions. Since then, we have made major improvements to transparency, oversight and engagement – from hosting patient and public engagement events in London and Manchester, to working with Ipsos MORI and others to speak to stakeholders about what they think companies like us can, and should, be doing in healthcare.