The healthcare ecosystem is changing, fast. The industry has always been an innovation leader: the constant mutation of diseases makes it paramount to stay ahead of the curve. A potential game changer for the future of healthcare is the increasing use of AI to diagnose and treat disease, upskill the workforce, and automate routine tasks. The recently announced AI Fund will provide £250 million to help explore and achieve this.

Many people believe AI has the potential to enable the augmentation and automation of roles across the healthcare spectrum. In this new blog, James Flint, CEO and Co-Founder of Hospify, who has been working closely with Kent Surrey Sussex Academic Health Science Network (KSS AHSN), delves deeper into what this really means.

While we praise AI for harnessing “Intelligence”, we still struggle to define what intelligence actually is. James explores the notion that true intelligence comes from us: AI is in fact a tool that can aid our progress, which makes the redundancy of humans implausible.

The term “artificial intelligence” certainly gets attention. But do the technologies the term refers to really constitute intelligence? And whether we decide they do or not, what role should they be playing in healthcare? Should they augment the job of the clinician, or should they automate it wherever possible?

To answer this question let us think for a moment, not about computers, but about the humble dung beetle. This creature is far better able to navigate its environment than any machine humans have ever built. Yet we don’t think of it as particularly intelligent. Instead we think of it as instinctive, which is to say trained by the massively parallel deep-learning method known as evolution to do whatever it needs to do.

Once we realise this we start to see more clearly that the intelligence at work in any system of AI is still very much human intelligence, and that AI can’t do very much of anything on its own – certainly not move large balls of dung around in order to survive and reproduce.

We (and maybe also dung beetles, despite their not being that smart) augment ourselves with our technologies, and we automate aspects of that augmentation that we understand really well. But the intelligence comes from our brains and from our culture, not from the technology. And this has important repercussions for the way we think about deploying AI in health.

If we think that AI is actually, truly intelligent in any strong sense of the word, then it makes sense to expect that systems we train to detect cancers in PET scans with a higher success rate than the average radiologist will, before long, replace radiologists. But if we think that AI, like the dung beetle, is not smart – that it is in fact a way of encoding a probabilistic response to data, more akin to an evolved instinct than to abstract thought – then we wouldn’t expect it for a moment to make our radiologists redundant. On the contrary, we’d expect it to help them do more and better radiology, and the discipline of radiology to become more effective than it has been hitherto.

As an area becomes well understood, it gets pushed further down the continuum from augmentation to automation, setting up and reinforcing the virtuous cycle that technology promises. The machines – whether a stethoscope, an MRI scanner, or a deep learning algorithm – help us gather or interpret data more effectively than we’ve been able to before. This in turn allows us to frame the issue better and devise better treatments, and a treatment is itself a kind of automation, as it packages up a problem and its solution in the form – to take a straightforward example – of a diagnosis and a course of medication.

There is no end to this process, not least because automated solutions have a notorious habit of creating new problems of their own. If we look again at the example of diagnosis and medication, we quickly see that our automated solutions – commonly known as pharmaceutical drugs – have created a whole new problem: that of determining the correct dosage of a drug to be administered to a given patient in a given condition or situation.

It’s not that we don’t have this knowledge – the appropriate dosages for a wide range of such circumstances are calculated as part of any drug trial. But communicating that information to the person administering the drug at the moment that they administer it – that is hard.

Years of training in medical school, libraries full of medical journals and literature, regularly published formularies and online pharmacopoeias… these don’t stop there being some 237m medication errors and avoidable adverse drug reactions in the UK every year, which cost the NHS £1.6bn and cause up to 22,000 deaths in England alone. And 71% of these medication errors take place in primary care, where the clinicians concerned are generally sitting in front of an internet-connected computer, with their formularies close at hand on their shelves.

As it happens, delivering information in response to natural language queries, whether written or spoken, is one of the things that AI is good at. An Alexa or Google Assistant for drug dosage information, however, is easier imagined than built.

The first issue is that queries of this kind are highly likely to contain patient-identifiable information, and so standard cloud-based chatbot infrastructures are radically unsuitable for this kind of work. The second is that the knowledge graphs and databases drawn upon to answer dosage questions must be rigorously vetted and continually updated, as the relevant knowledge changes all the time.

AI must therefore be applied on both sides of the problem – the chat interface and the maintenance of the knowledge base – and deployed on an infrastructure that guarantees the privacy of both the question and the answer given. It’s non-trivial, even in this age of arguing with the Amazon Echo in our kitchen and summoning Siri on our phone.
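To make the shape of the problem concrete, here is a minimal, purely illustrative sketch in Python of the two constraints just described: redacting patient-identifiable information before a query goes anywhere, and answering only from a locally held, vetted knowledge base. The names, patterns and toy data are entirely hypothetical – this is a sketch of the constraints, not of any real system.

```python
import re

# Hypothetical, toy dosage knowledge base. In practice this would be a
# rigorously vetted, versioned store kept continually up to date, not a dict.
DOSAGE_KB = {
    ("amoxicillin", "adult"): "500 mg every 8 hours",
    ("amoxicillin", "child"): "Dose by body weight; consult the current formulary",
}

# Rough stand-ins for real PII detection (NHS-number-like digits, dates of
# birth). A production system would need far more robust redaction than this.
PII_PATTERNS = [
    re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
]

def redact(query: str) -> str:
    """Strip patient-identifiable fragments before the query leaves the device."""
    for pattern in PII_PATTERNS:
        query = pattern.sub("[REDACTED]", query)
    return query

def answer_dosage_query(query: str) -> str:
    """Answer a dosage question from the local knowledge base only.

    Nothing identifiable is sent to a third-party cloud service: the query is
    redacted, and the lookup happens against a locally held, vetted KB.
    """
    safe_query = redact(query).lower()
    for (drug, group), dosage in DOSAGE_KB.items():
        if drug in safe_query and group in safe_query:
            return f"{drug} ({group}): {dosage}"
    return "No vetted answer found; please consult the formulary."

if __name__ == "__main__":
    print(answer_dosage_query("Adult patient, NHS 123 456 7890: amoxicillin dose?"))
```

Even this toy version shows where the real difficulty lies: the redaction has to be trustworthy enough to satisfy information governance, and the knowledge base has to be curated and refreshed by people who understand the clinical content.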

At my company Hospify we believe this can be done using our own compliant chat infrastructure, and we have been working with partners Kent Surrey Sussex AHSN and Warwick University Department of Data Science to solve some of the challenges involved.

We remain convinced that this kind of AI, designed to help clinicians do their core job better with fewer mistakes, can have a bigger impact on patient outcomes than technologies which imagine they can somehow take clinicians out of the loop.

If we really do want, at some point in the future, to automate some of the higher order functions that clinicians carry out, then the first step is to build some technologies that can help us apply the healthcare automations we already have in a manner that maximises their effectiveness. Only by doing that will we learn what really works – and so roll ourselves an even better ball of dung.