As the UK population grows, so does the strain on the NHS. This is where Artificial Intelligence (AI) can start to make major leaps forward for medical professionals, patients and the entire healthcare ecosystem. Given the scale of medical datasets and the human effort needed to aggregate and analyse them, and to diagnose the patients behind them, AI can prove a powerful tool. The recent £250 million AI fund aims to do just this, by building systems to improve cancer screening, exploiting DNA datasets, developing predictive systems and automating routine administrative tasks.

Richard Barker, CEO and Co-Founder of New Medicine Partners, delves deeper into the ways that AI is improving medical decision-making today, and how we can ensure it is developed efficiently and ethically in the years to come.

As biomedical science – and especially precision medicine – advances, the task of staying current is now beyond the capability of any doctor. If she is a GP, new diagnostic tests, treatments and guidelines regularly appear for the hundreds of common conditions she sees. If she is a specialist in cancer, for example, new sub-segments of disease are being discovered through mutational or phenotypic analysis, new drugs and cell therapies are emerging from laboratories, and patients appear with unusual combinations of co-morbidities or pathologies that she has never seen before but suspects others have.

This information overload cries out for artificial intelligence (AI)-based decision support: not to replace the clinician, but to ensure that the tests to run and the treatment options to consider are up to date, and to compare that unusual patient with those that colleagues around the world may have seen and successfully treated. AI in medicine is a clear UK national priority, and it has other applications – reading scans, optimising operations – but supporting the decisions of clinicians is among the most important.

The need is clear, but the road to meeting it is strewn with obstacles and challenges. The most prominent is access to patient-level data that AI systems can learn from and interrogate. Concerns about confidentiality trouble patients and administrators, as we saw in two recent, well-intentioned projects by Google, in both the UK and the US. While there are well-developed processes for basic de-identification, there are also ways to re-identify patients, so health systems are increasingly looking for watertight guarantees, or restricting AI systems to operate within the institutional firewall. While patients themselves are largely supportive of their data being shared responsibly and used for research, healthcare providers are less so, too often seeing it as “their” data and a source of professional or institutional advantage. Ultimately there will be technologies that put access to data in the hands of patients themselves, as anticipated under GDPR; distributed ledger (‘blockchain’) technology is one promising enabler.
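To make the de-identification point concrete, here is a minimal sketch of basic pseudonymisation: salted hashing of a direct identifier plus coarsening of a quasi-identifier. The field names are hypothetical rather than any real NHS schema, and, as noted above, such measures alone do not prevent determined re-identification.

```python
import hashlib
import os

SALT = os.urandom(16)  # secret salt held by the data controller

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and coarsen a
    quasi-identifier; this alone does not guarantee anonymity."""
    out = dict(record)
    out["patient_id"] = hashlib.sha256(
        SALT + record["nhs_number"].encode()
    ).hexdigest()
    del out["nhs_number"]
    # Keep only the outward postcode district: full postcodes are a
    # classic route to re-identification.
    out["postcode"] = record["postcode"].split()[0]
    return out

print(pseudonymise({"nhs_number": "943 476 5919",
                    "postcode": "SE1 7EH",
                    "diagnosis": "C50.9"}))
```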

A lesser, but still important challenge is how best to reflect clinical guidelines in what doctors see. These will include both national level and local versions or restrictions: for example, NICE (and Cancer Drugs Fund) guidelines in England, and local hospital or CCG guidelines controlling the local use of specific drugs. 

Inherent to AI systems, of course, is the so-called ‘black box’ problem. Doctors will be suspicious of, and therefore tend to ignore, recommendations whose basis they cannot understand or query. Systems will therefore need to be able to explain the grounds for their outputs, or at least to show how those outputs change when different assumptions are made about the input factors.
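One simple way to show that kind of input sensitivity is a one-at-a-time analysis: vary each factor and report how the model’s estimate moves. The sketch below assumes a toy logistic model on synthetic data; the feature names are illustrative, not a real clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # age, blood pressure, biomarker (standardised)
y = (X @ [1.5, 0.8, -0.5] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
features = ["age", "blood_pressure", "biomarker"]

patient = np.array([[0.2, 1.1, -0.3]])
base = model.predict_proba(patient)[0, 1]
print(f"baseline risk: {base:.2f}")

for i, name in enumerate(features):
    perturbed = patient.copy()
    perturbed[0, i] += 1.0               # shift this factor by one standard deviation
    delta = model.predict_proba(perturbed)[0, 1] - base
    print(f"{name}: +1 SD changes risk by {delta:+.2f}")
```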

Regulation is another challenge to overcome. Algorithm-based clinical decision support will typically be regarded by regulators as Software as a Medical Device (SaMD). In Europe, this means approaching a diminishing number of Notified Bodies to achieve a CE mark. This may take some time, as the system struggles to implement the new Medical Device Regulation (much tougher than the Directive that preceded it). In the US, the FDA has been squaring up to the challenge, with advice issued in 2013, 2014, 2015 and 2017. The most recent guidance seems quite progressive, recognising that a learning algorithm cannot be expected to be resubmitted every time it learns (!), but only when it enters a new domain.

Last, but by no means least, is the design of the clinical interface that embodies the AI system’s advice. Doctors already find most of the EMR systems with which they interact burdensome and complex. If an AI-based clinical decision support application is yet another piece of software with which they must grapple, it will go unused, whatever its merits: doctors have very limited time and perhaps even more limited patience. Entering new passwords, clicking through multiple boxes and being asked irrelevant questions will all result in even the best AI systems being overlooked. Interfaces must be simple, intuitive and clearly and directly relevant to the specific decision that is the subject of the consultation.

Let us turn to the massive advantages of overcoming these barriers and applying AI to both historical Real World Data (RWD) and real-time patient data. Overall, the effect will be to democratise access to clinical knowledge: both to current best practice and to specific learnings from comparable individual cases in rare and complex conditions. More specifically, we will improve the accuracy of diagnosis by drawing on a much larger range of relevant factors than clinicians customarily use or guidelines incorporate. It is said that doctors can easily take account of only about five factors in reaching their conclusions; machine learning systems have no such restriction.
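As a simple illustration of that last point, a standard learner will happily weigh 50 factors at once and report which of them mattered. Everything below – the synthetic data and the choice of a random forest – is an assumption for illustration, not a description of any deployed system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 50))                          # 50 factors per patient
weights = rng.normal(size=50) * (rng.random(50) < 0.3)   # only ~15 truly relevant
y = (X @ weights + rng.normal(size=2000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("five most informative factors:", top)
```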

Once the basic diagnosis is confirmed, AI approaches can identify sub-groups or cohorts of patients that have different outcomes (better or worse) than the ‘average’ outcomes demonstrated in randomised clinical trials, which of necessity enrol homogeneous groups of patients lacking the complicating factors found in many, if not most, real-world patients. AI-powered clinical support systems can then assign new patients to such subgroups for optimum treatment choice, versus what standard guidelines might recommend, taking account of contra-indications and side-effect risks identified in the RWD analysis.
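A hedged sketch of what such cohort discovery can look like: cluster patients on baseline features, then compare response rates per cluster. The synthetic data and the choice of k-means are illustrative assumptions, not a particular product’s method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two latent subgroups with different treatment outcomes
features = np.vstack([rng.normal(0, 1, (300, 4)),
                      rng.normal(2, 1, (200, 4))])
outcome = np.concatenate([rng.random(300) < 0.7,    # ~70% respond
                          rng.random(200) < 0.4])   # ~40% respond

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for c in (0, 1):
    rate = outcome[clusters == c].mean()
    print(f"cluster {c}: n={np.sum(clusters == c)}, response rate={rate:.0%}")
# A new patient could then be assigned to the nearest cluster to inform choice.
```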

AI systems may also be able to analyse time series more effectively than conventional approaches can. This is particularly important in chronic conditions liable to progression or recurrence.
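A minimal sketch of the kind of temporal signal involved: a rolling comparison over successive test results that flags a sustained rise a single reading would miss. The window and threshold below are arbitrary assumptions.

```python
import numpy as np

def progression_flag(readings, window=3, rise=0.15):
    """Flag if the mean of the last `window` readings exceeds the
    previous window's mean by more than `rise` (relative)."""
    r = np.asarray(readings, dtype=float)
    if len(r) < 2 * window:
        return False
    recent, prior = r[-window:].mean(), r[-2*window:-window].mean()
    return (recent - prior) / prior > rise

history = [4.1, 4.0, 4.2, 4.3, 4.9, 5.4]   # successive test results
print(progression_flag(history))            # True: a sustained upward trend
```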

All the knowledge that can be mined by such systems has value for every major stakeholder in healthcare. Patients can have greater assurance that they are being properly diagnosed and treated, and AI systems can be programmed to present the personalised information about their case in terms that they can understand. Clinicians and the health systems in which they work can become members of an extended ‘clinical knowledge network’, sharing and receiving insights that transcend any one practice or geographical area. Companies and laboratories with precision medicine tests will see the use of such tests grow, and biopharmaceutical companies will have a basis to ensure their expensive therapies are reaching the patients for whom they have the greatest value, and to track their outcomes.

It is appropriate to include some comments on the dangers of superficial use of RWD analysis and how to avoid them. The confusion between correlation and causation is one such danger. An oft-quoted example is the strong correlation between a nation’s Nobel Prizes per capita and its per capita chocolate consumption. Before scientists reach for their chocolate bars, it is worth considering whether there might be a confounding factor (such as national affluence) that could account for both! To avoid this danger, it is important to start with, or develop, plausible causative links – in the form of clinical hypotheses – which can then be tested.
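The trap is easy to demonstrate with a few lines of synthetic data: a hidden confounder drives both variables, producing a strong raw correlation that all but vanishes once the confounder is controlled for.

```python
import numpy as np

rng = np.random.default_rng(3)
affluence = rng.normal(size=5000)                        # hidden confounder
chocolate = affluence + rng.normal(scale=0.5, size=5000)
nobels    = affluence + rng.normal(scale=0.5, size=5000)

print("raw correlation:", round(np.corrcoef(chocolate, nobels)[0, 1], 2))

def residual(x, z):
    """Remove the linear effect of z from x (control for the confounder)."""
    slope = np.cov(x, z)[0, 1] / np.var(z)
    return x - slope * z

r = np.corrcoef(residual(chocolate, affluence),
                residual(nobels, affluence))[0, 1]
print("after controlling for affluence:", round(r, 2))
```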

A second danger is of appearing to supplant the clinician. It seems likely that some routine tasks – like reading images – may be largely ‘outsourced’ to machines (of course, trained on millions of images read by human radiologists or pathologists). However, decisions about diagnosis, prognosis and treatment will stay in the hands of clinicians for the foreseeable future, with more or less augmentation of their intelligence from AI systems. 

A third danger is shared by all uses of AI: that it will be held to standards well above those of the humans it augments. Any accident involving an autonomous vehicle hits the headlines and risks arousing opposition. Similarly, if an AI system contributes to a dangerous or fatal clinical error, it can set back the development of the whole field, such is the understandable suspicion on the part of the general public.

Obstacles and potential dangers of this kind face any new technology, but the prize here is so substantial that we must press ahead. Democratising clinical knowledge is essential, so that a patient facing a serious disease need no longer agonise over whether he is seeing the best clinician. Getting the best care is a goal that all of us can relate to, and in matters medical, we all deserve the best. AI-powered clinical decision support is a major step towards that goal.

Prof Barker is the founder of Metadvice, a company working globally to build AI into better clinical decision-making. He is also chair of the Health Innovation Network, the South London AHSN.