BETTER DIAGNOSES. Personalised support for patients. Faster drug discovery. Greater efficiency. Artificial intelligence (AI) is generating excitement and hyperbole everywhere, but in the field of health care it has the potential to be transformational. In Europe analysts predict that deploying AI could save hundreds of thousands of lives each year; in America, they say, it could also save money, shaving $200bn-360bn from overall annual medical spending, now $4.5trn a year (or 17% of GDP). From smart stethoscopes and robot surgeons to the analysis of large data sets or the ability to chat to a medical AI with a human face, opportunities abound.
There is already evidence that AI systems can enhance diagnostic accuracy and disease tracking, improve the prediction of patients’ outcomes and suggest better treatments. They can also boost efficiency in hospitals and surgeries by taking on tasks such as medical transcription and monitoring patients, and by streamlining administration. They may already be shortening the time it takes for new drugs to reach clinical trials. New tools, including generative AI, could supercharge these abilities.
Yet as our Technology Quarterly this week shows, although AI has been used in health care for many years, integration has been slow and the results have often been mediocre. There are good and bad reasons for this. The good reason is that health care demands high evidentiary barriers when introducing new tools, to protect patients’ safety. The bad reasons involve data, regulation and incentives. Overcoming them could hold lessons for AI in other fields.
AI systems learn by processing huge volumes of data, something health-care providers have in abundance. But health data is highly fragmented, and strict rules control its use. Governments recognise that patients want their medical privacy protected. But patients also want better and more personalised care: each year roughly 800,000 Americans are harmed by poor medical decision-making.
Improving accuracy and reducing bias in AI tools requires them to be trained on large data sets that reflect patients’ full diversity. Finding secure ways to allow health data to move more freely would help. It could also benefit patients directly: they should be given the right to access their own records in a portable, digital format. Consumer-health firms are already making use of data from wearables, with varying success. Portable patient records would let people make fuller use of their data and take more responsibility for their health.
Another problem is managing and regulating these innovations. In many countries the governance of AI in health, as in other areas, is struggling to keep up with the rapid pace of innovation. Regulatory authorities may be slow to approve new AI tools, or may lack the capacity and expertise to assess them. Governments need to equip regulators for that task. They also need to fill regulatory gaps in the surveillance of adverse events and in the continuous monitoring of algorithms, to ensure they remain accurate, safe, effective and transparent. That will be hard. One solution would be for countries to work together, learning from each other and creating minimum global standards. A simpler, more international regulatory system would also help create a market in which small companies can innovate. Poorer countries, with less developed health infrastructure, have much to gain from introducing new tools, such as an AI-powered portable ultrasound device for obstetrics. Because the alternative to an AI tool is often no treatment at all, they may even be able to leapfrog the entrenched health systems of rich countries – though a lack of data, connectivity and computing power will get in the way.
A final problem involves institutions and incentives. AI promises to cut medical costs by assisting or replacing workers, improving productivity and reducing errors, all while improving care. That is desperately needed. The world could be short of 10m health-care workers by 2030, around 15% of today’s workforce. And in 2022 administration accounted for about 30% of America’s excess health-care costs compared with other countries.
Yet saving money through innovation is tricky. Health systems are set up to use it to improve care, not to cut costs. New technology may account for as much as half of the annual growth in health spending. Layering new systems on top of existing ones will increase costs and complexity. But redesigning processes to make efficient use of AI is likely to be resisted by patients and medics. Though AI may be able to triage patients over the phone or deliver routine results, many will still demand to be seen in person.
Worse, many health systems, such as America’s, are set up to reward the volume of work. They have little reason to adopt technologies that cut the number of visits, tests or procedures. And even publicly run health-care systems may lack incentives to adopt technologies that reduce costs rather than improve outcomes, perhaps because saving money may lead to a smaller budget next year. Unless governments can change these incentives, so that AI combines better treatment with new efficiencies, innovation will increase costs. Accordingly, governments and health authorities will need to fund schemes dedicated to testing and deploying new AI technologies. Countries including America, Britain and Canada are pointing the way.
AI, MD
Much of the burden for boosting AI in health care falls on governments and regulators. However, companies have a part to play, too. Insurers have already used AI tools to deny care unfairly; firms have mis-sold or overstated the abilities of health AI; algorithms have made mistakes. Firms have a duty to ensure that their products are safe, reliable and accountable, and that humans, however flawed, remain in control.
These obstacles are formidable, but the potential benefits of using AI in health care are so vast that the case for overcoming them should be obvious. And if AI can be made to work in medicine, it could provide a prescription for the adoption of technology in other fields.