No Trust, No Scale - What AI in the NHS Needs Next
The advancement of AI in the NHS has been a wild ride. We are already at a point where patients are asking how it is being used in their care. In response, they deserve tools that have earned their trust, with clarity about what those tools are doing, what they are not doing, and how they improve health outcomes.
Yet much of the perceived impact remains locked in pilots and proofs of concept. The effort to balance innovation with consent, accountability, and data governance has played out publicly, and at times uncomfortably.
Last year, we saw what happens when innovation outruns governance: excitement, uneven adoption, confusion, and no small amount of chaos. But it also gave us the evidence we’ve been waiting for - clinicians will indeed gravitate towards AI technology, on behalf of their patients, if it’s safe, regulated, and backed by the right assurances.
Uncertainty to possibility
We cannot have another year of talking about potential or faltering in the friction of deployments. We need to move towards consolidation: clarifying the guardrails, formalising governance, and making AI a dependable part of clinical work - not a source of anxiety through ambiguity. We must strive for a landscape that empowers suppliers to develop tools, and healthcare organisations to buy them, secure in the knowledge that safety has been built in from the start. Without regulatory clarity, we risk slowing progress and undermining both patient and clinician confidence. Uncertainty cannot continue to outweigh possibility.
The prize is clear. Used well, AI can optimise documentation, support waiting list and backlog reduction, spot individual health trends earlier, and relieve the administrative burden that continues to pull clinicians away from their patients. Above all, it can preserve the time and cognitive bandwidth for what only clinicians can do: clinical reasoning, nuanced decision-making, and compassionate conversation.
In my experience, clinicians tend to fall into three broad groups: those excited by AI’s potential, those deeply apprehensive, and those cautiously reserved. Many use technology extensively in their personal lives yet hesitate to adopt it at work. That hesitation is not resistance to progress; it is a rational response to risk in a setting where the stakes are high.
A July 2025 study found that 80% of GPs wanted more training to understand AI tools, even as some reported using non-medical-grade technologies for clinical tasks. That gap is concerning. It is not just a skills gap; it is a risk literacy gap. Clinicians need to understand not only what a tool can do, but where it can fail, when to question it, and where accountability ultimately sits.
Therefore, training should not be reduced to a one-off e-learning module or a vague “AI awareness” session. This education needs to be continual, practical, and role-specific - intentionally built into professional routines with protected time and supported by high-fidelity feedback so learning translates into safer practice.
Making AI safe in clinical practice
We will undoubtedly see an acceleration of AI deployments across NHS trusts. Hybrid AI-clinician workflows will become much more common and use cases will expand beyond documentation into areas such as triage, diagnostic support, remote monitoring, and population health.
With so much at stake, how can we shift away from institutional apprehension towards the confidence needed to make this work at scale?
We must remove as much friction, uncertainty and ambiguity as possible.
There is no simple solution, but a unifying principle, similar to that taken by the CQC, can help: prioritising patient safety above all else.
So, what does “patient safety” look like day-to-day with AI? Start with transparency built into everyday practice, not just in policy documents:
1. Clear audit trails showing when AI suggestions were viewed, verified, overridden, or ignored (see the sketch after this list).
2. Shared incident reporting for AI-related near misses - so we learn as a system, not as isolated sites repeating the same mistakes.
3. Iteratively documented change - when the dataset, model, prompt, or workflow changes, the clinical safety story changes too.
4. A culture of safety built through transparency, candour, and learning - if we want trust, we need the habits and processes that make trust rational.
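To make the first of those points concrete, here is a minimal, illustrative sketch of what a structured audit record for an AI suggestion might capture. It is an assumption for discussion only - the `AISuggestionAudit` structure, field names, and action values are hypothetical, not an NHS, CQC, or supplier standard.

```python
# Illustrative only: a hypothetical shape for an AI-suggestion audit record.
# Field names and enum values are assumptions, not any mandated NHS schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ClinicianAction(Enum):
    VIEWED = "viewed"          # suggestion was displayed to the clinician
    VERIFIED = "verified"      # clinician reviewed and accepted it
    OVERRIDDEN = "overridden"  # clinician replaced it with their own decision
    IGNORED = "ignored"        # suggestion was shown but not acted upon


@dataclass
class AISuggestionAudit:
    suggestion_id: str       # unique reference to the AI output
    model_version: str       # which model / prompt / dataset release produced it
    clinician_id: str        # who saw it (pseudonymised where appropriate)
    action: ClinicianAction  # what the clinician did with the suggestion
    rationale: str = ""      # free-text reason, e.g. for an override
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: recording that a clinician overrode a (hypothetical) triage suggestion.
entry = AISuggestionAudit(
    suggestion_id="sugg-0001",
    model_version="triage-model-2.3",
    clinician_id="clin-42",
    action=ClinicianAction.OVERRIDDEN,
    rationale="Patient history indicated higher urgency than suggested.",
)
print(entry)
```

The point of a record like this is not the particular fields, but that every AI suggestion leaves a traceable account of what was shown, what the clinician did with it, and which model version was responsible - the raw material for the shared incident reporting and documented change described above.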
The toughest ethical challenges around bias, explainability, and equity of access will persist. However, greater transparency in how systems are built and used - combined with cross-disciplinary education - will strengthen our ability to address them.
Partnership and proactive compliance
For suppliers, proactive compliance will become the new normal. Those who treat regulation as a box-ticking exercise risk accruing significant “compliance debt”, only to face costly remediation later. Compliance debt not only slows regulatory approval but also erodes clinical trust, prolongs procurement cycles, and stalls deployment even when a product is clinically sound.
We are also moving toward higher levels of scrutiny as tools shift from passive information support into active influence on decisions. As that happens, suppliers who engage early with local clinical safety, information governance, and infrastructure requirements will outperform those who rely solely on top-down national standards. Responsible, personalised engagement will matter - because healthcare is local even when policy is national.
For their part, NHS organisations must be savvy customers and strong advocates for their patients. They should seek out partners that:
1. Demonstrate interoperability as a cornerstone for longevity.
2. Offer a scalable, realistic approach to implementation (not just demos).
3. Are transparent about reciprocal technical needs.
4. Keep patient outcomes intentionally at the centre of design.
Forward-thinking organisations are already showing that there is a path ahead. Some have seen measurable benefits from mature technologies - for example, achieving up to a 90% reduction in dictated-to-approved time for outpatient clinic letters through speech transcription. These gains do more than save minutes: they create the operational headroom and confidence to introduce newer AI tools safely and sensibly.
From debate to delivery
Although governance is the most visible barrier to realising AI’s potential, we still cannot afford to overlook the foundations. Without long overdue investment in underlying infrastructure, even the safest, most compliant tools will fail to land in day-to-day practice.
And when AI fails because the underlying infrastructure cannot support it, clinicians are unlikely to blame the network or hardware. They blame the technology itself. And over time, that erodes trust.
The coming months will be pivotal: a period of consolidation and real traction after a year of friction. But trust will not appear organically through experimentation. It must be deliberately cultivated - through design, governance, training, infrastructure, and partnership.
If we want AI to become a dependable and trustworthy partner in patient care, the route is clear: establish clinical guardrails, demand robust compliance from suppliers, invest in workforce education, and ensure the surrounding digital ecosystem is fit for purpose. Then, and only then, can we progress from the chaos of debate into real-world delivery that inches toward its long-term promise of a genuine shift in modern medicine.