The King's Fund Digital Health and Care Conference

July 22, 2025

At the 2025 King's Fund Digital Health and Care Conference, there was plenty of discussion around health tech and, of course, AI.

It may seem like every angle on AI has been explored, with buzzwords like “potential” and “challenges” cropping up time and time again. But this year’s event told a different story. Entirely new themes emerged, themes that have, until now, remained buried beneath the excitement.

AI is no doubt pivotal for the future of UK healthcare, and for healthcare across the world more broadly. But experts now face the next hurdle: deployment.

It’s tempting to think, “We’ve got the technology, so why not just use it?” Making AI work in the real world, however, is far more difficult than it first appears.

Against the backdrop of the King's Fund's historic sandstone building in central London, the 2025 Digital Health and Care Conference brought together the UK's leading tech and health experts. Source: The King's Fund website.

Digital Exclusion Is a Bigger Problem Than We Thought

With the UK government charging ahead with a radical analogue-to-digital transformation of the NHS, as outlined in its 10-year plan, concerns are rising for those stuck in the analogue phase, a key theme of this year's conference.

Not everyone has access to the same technologies; regional disparities, while they’ve improved over time, are still very much present.

“When I go around the country, there's a huge amount of variation in the capacity and capability for organizations to adopt new technologies. We have to do something about that,” said Dr Vin Diwakar, Medical Director for Secondary Care and Transformation at NHS England.

Arguably, the plan places AI at the heart of NHS digitization. But when some areas are equipped for AI while others lag behind, healthcare standards fracture.

Affluent regions gain access to a better quality of care, and by extension a better standard of living, while less fortunate areas struggle to keep up. The UK risks entrenching the postcode lottery.

The Thorny Edge of Digital Literacy

And even if everyone could access the same technology, they might not know how to use it.

Around 10 million people in the UK lack foundation digital skills (basic tasks like using a search engine, sending an email, and spotting phishing attempts), including a sizeable portion of the NHS workforce, according to Diwakar. But many of those affected are 65 or older, the group most likely to rely on healthcare services.

Digitizing the UK, then, ironically risks leaving the people who would benefit most from better healthcare services in the dark. They become digitally excluded.

Dr Vin Diwakar spoke about some of the biggest issues facing the UK as AI transitions from "bench to bedside."

But there are solutions. Diwakar pointed to national proxy services, which give family members the ability to make health decisions on a person's behalf, and to the Digital Accessibility Centre in Leeds, which aims to design more user-friendly technology with the help of around 14,000 participants.

Both he and Dr Malte Gerhold, Director of Innovation and Improvement at the Health Foundation, also pointed to the role of financial incentives. Under the NHS 10-year plan, local centers will be encouraged to deploy digital tools in exchange for rewards: outcomes like reduced hospital admissions earn extra funding.

Whether these initiatives are enough to clamp down on the issue, however, remains to be seen.

“What we can do from the center will only ever do so much. No incentive will ever be strong enough if the [local organizations] don’t feel the pull to change. We need to build motivation around it.” — Dr Malte Gerhold

Walking the AI Tightrope Has Been Done Before, But Can We Do It Again?

Digital exclusion aside, getting AI from bench to bedside requires a careful balance between speed and safety, a balance that UK stroke units seem to have mastered.

In 2019, 5% of stroke units in the UK were using AI diagnostic tools. By late 2024, that number hit 100%, a monumental jump in just six years. Diwakar described this success as a playbook for how AI can be quickly and safely deployed.

The first step, he said, is taking a top-down approach: collecting evidence early on that shows why the AI solution is needed in the first place. This hinges on hard facts: the number of patients experiencing severe delays in their treatment, for instance.

Next, he praised the structural support from a national procurement framework: umbrella guidelines that let health centers buy and use technologies faster. For example, the framework might list approved suppliers whose AI tools meet regulatory standards; a regional health center can then acquire tools from that list without having to carry out its own testing.
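
In spirit, that kind of framework behaves like an allow-list lookup. Here's a minimal Python sketch of the idea; the supplier names, tool names, and standards below are entirely hypothetical and don't reflect any real NHS framework.

```python
# Minimal sketch of an allow-list-style procurement check.
# All supplier/tool names and standards here are hypothetical.

APPROVED_FRAMEWORK = {
    # tool name -> supplier and the standards already assessed nationally
    "StrokeScanAssist": {"supplier": "ExampleVendor Ltd", "standards": {"UKCA", "DTAC"}},
    "ChestXRayHelper": {"supplier": "AnotherVendor", "standards": {"UKCA"}},
}

REQUIRED_STANDARDS = {"UKCA", "DTAC"}  # illustrative placeholder requirements

def can_procure(tool_name: str) -> bool:
    """A regional centre checks the national list instead of re-testing itself."""
    entry = APPROVED_FRAMEWORK.get(tool_name)
    return entry is not None and REQUIRED_STANDARDS <= entry["standards"]

print(can_procure("StrokeScanAssist"))  # True: on the framework, buy directly
print(can_procure("UnlistedTool"))      # False: would need local assessment
```

The point of the design is that regulatory checking happens once, centrally, rather than being repeated by every local organization.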

Finally, Diwakar stressed that deployment should be carefully encased in a training “sandwich”: support from both the top (in this case, the national clinical director for stroke) and the bottom, i.e., peer-to-peer learning.

“While the pace of adoption is really important, you don't want an unsafe pace of adoption,” he said, reiterating the delicate balance between speed and safety as AI begins to roll out across the UK. Ticking off these three checkboxes can help strike that balance.

The NHS Can't Do It Alone

But Diwakar also suggested that the NHS can’t do it alone. To truly get healthcare to where it should be, we need cross-sector collaboration.

Competing for investment, with each sector chasing the biggest rewards, is no longer a sustainable model. Enter the “Age of Big Data,” where data sits at the heart of every insight imaginable; for that data to be useful, government must work with industry, and vice versa.

He drew on the recent groundbreaking partnership between Health Innovation Manchester and the major pharmaceutical company Eli Lilly, which aims to collect real-world evidence on the effects of tirzepatide, a GIP/GLP-1 receptor agonist, over the course of five years.

Eli Lilly will be monitoring the long-term health impacts, looking at things like the prevalence of obesity-related complications, much as clinical trials have done before.

But, unlike previous trials, government involvement offers a unique window into the societal impacts of GLP-1 use, examining aspects like healthcare resource utilization, changes in employment status, and quality of life. These more collaborative initiatives can piece together a bigger picture of the GLP-1 story, beyond just what's going on in the body.

The AI Revolution Is Failing the People Who Matter Most

It's clear that momentum is building around health digitization, especially for AI. However, as the conference neared its end, Professor Susan Shelmerdine, an NHS clinical entrepreneur and Roentgen Professor at the Royal College of Radiologists, spotlighted a glaring technological blind spot, one that is welded into the AI ecosystem.

“Despite all the hype and investment, AI healthcare tools don’t work for one in five of the population: children. Our most vulnerable people,” said Shelmerdine.

A study published in JAMA earlier this year underscores the gravity of the problem. Of 880 FDA-approved AI medical devices, 150 were labelled as safe for children. So far so good, Shelmerdine said.

But half of the devices don't specify whether they're suitable for children at all, forcing hospitals to make assumptions about their safety in pediatric care. It's a speculative gray area that healthcare cannot afford to dip its toes into.

Of the 150 labelled as safe for children, over 20% hadn't actually been trained on pediatric data, which is not an insignificant number. Worse still, 66% of them had been trained on data of “unknown” origins.
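
To put those percentages into absolute numbers, here's a quick back-of-the-envelope calculation using only the figures quoted above (and reading "of them" as the 150 pediatric-labelled devices):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
total_devices = 880        # FDA-approved AI medical devices in the JAMA study
pediatric_labelled = 150   # devices labelled as safe for children

# "over 20%" not trained on pediatric data -> at least ~30 devices
no_pediatric_training = round(0.20 * pediatric_labelled)

# 66% trained on data of unknown origin -> ~99 devices
unknown_origin = round(0.66 * pediatric_labelled)

print(f"{pediatric_labelled}/{total_devices} devices are labelled safe for children")
print(f"At least ~{no_pediatric_training} of those lack pediatric training data")
print(f"~{unknown_origin} were trained on data of unknown origin")
```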

One Size Doesn't Fit All When It Comes to AI

And even if they had been trained on pediatric data, Shelmerdine made a valid point: “children” is an umbrella term covering many different stages of life. A newborn is very different from a school-aged child, who in turn is very different from a teenager.

“When you develop an AI tool in one population, and then you apply it to another population that it’s not meant to be used in, unfortunate incidents happen. We’ve seen this happen in radiology with children,” said Shelmerdine.

The risks that come with this problem could be catastrophic. Shelmerdine drew on a case from the American College of Radiology, in which an AI triage tool prioritized patients with brain bleeds on their CT scans so that they could be seen and treated first. The clinicians, however, weren't aware the tool didn't work in children.

Because of this, a child with a brain bleed was de-prioritized and received delayed treatment. A brain bleed isn't the common cold; if not treated immediately, the consequences are potentially fatal.
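
One safeguard, in principle, is to gate the model behind its validated population before trusting its output. Below is a minimal, entirely hypothetical Python sketch; the tool name, age cut-off, and scoring threshold are illustrative and not taken from the case above.

```python
# Hypothetical sketch: gate an AI triage tool behind its validated population.
# The tool name, age cut-off, and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class TriageModel:
    name: str
    min_validated_age: int  # youngest age the model was validated on

def triage_priority(model: TriageModel, patient_age: int, ai_score: float) -> str:
    """Fall back to standard human review when the patient is outside the
    population the model was validated for, instead of trusting its score."""
    if patient_age < model.min_validated_age:
        return "standard queue + flag for radiologist review (outside validated population)"
    return "urgent" if ai_score > 0.8 else "routine"

brain_bleed_model = TriageModel(name="HeadCTBleedDetect", min_validated_age=18)
print(triage_priority(brain_bleed_model, patient_age=9, ai_score=0.2))
```

The design choice worth noting is the fail-safe default: when a patient falls outside the validated population, the model's output is ignored rather than trusted.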

Shelmerdine, however, said that there are pockets of progress and hope.

She mentioned how the Centre for Excellence in Regulatory Science is trying to set up a youth council specifically to handle AI regulation for these younger age groups. Alongside this, the Children's Hospital Alliance, a network of children's hospitals and expert pediatricians, is bringing the best minds together to decide how to enact change, said Shelmerdine.

Professor Susan Shelmerdine addressing the difficulties of using AI in pediatric care.

For now, though, AI and children remain at opposite ends of the safety spectrum, leaving a gray area that key leaders don't quite know how to fill.

She concluded:

“It's really hard to say we're doing things for the future if that future doesn't include children. They may just be 20% of our population, but they are certainly 100% of our future. So let's design with intent and not by default.”