Confidence in AI in general practice
In this editorial piece, Annabelle Painter (Imperial College London, UK), Michael Nix (Leeds Teaching Hospitals NHS Trust, UK) and Jay Verma (Shakespeare Health Centre, Hayes, UK) discuss how design, implementation and education can influence confidence in AI for decision-making in primary care and family medicine.
AI on the horizon in general practice
General practice will be among the three NHS specialities most affected by AI, according to an AI roadmap analysis recently published by Health Education England (HEE, London, UK). A common AI use case within general practice is administrative task automation, and many GPs agree that AI can reduce the burden of paperwork and provide clerical assistance. However, AI use cases increasingly involve clinical and decision support, for example in triage, assisted diagnosis, care management – including personalized self-management – and proactive detection of disease and patient deterioration. Faced with ever-increasing workloads and growing concerns about GP burnout, workload-relieving AI solutions are attractive to some. However, scepticism remains amongst many GPs about the role of technology in primary care clinical decision-making, which is felt to require quintessentially human skills. Successful, safe adoption of AI clinical decision-making support in general practice will require obtaining the trust and confidence of frontline GPs.
What determines clinicians’ confidence in AI utilized in clinical decision-making?
HEE and the NHS AI Lab (London, UK) have published a collaborative research report entitled ‘Understanding Healthcare Workers’ confidence in AI’, exploring the factors that drive clinical confidence. The report differentiates between factors that establish the trustworthiness of an AI technology and factors affecting clinician confidence during AI-assisted clinical decision-making.
Factors affecting trustworthiness of AI technology
The report suggests that the trustworthiness of an AI technology is a foundation for clinical confidence and is supported by governance – regulation, validation, guidelines and liability – and implementation – strategy, technical implementation and systems impact. Trustworthiness could be strengthened by providing assurances in these domains. Governance factors include meeting appropriate standards for safety and efficacy, supported by clinical evidence of performance in diverse primary care settings, alongside guidelines for clinical utilization. It should also be clear who would be held liable if an AI-assisted pathway were to cause patient harm. Implementation should ensure effortless interoperability with existing systems, maximize user-friendliness and reduce clinician burden.
Trustworthiness also depends on potential risks being carefully considered. This should include assessment and mitigation of model bias against different patient groups in the local population, as well as guidance on counseling patients about the risks and benefits of AI. A non-AI fallback pathway should also be readily available for cases of AI failure or where patients are uncomfortable with AI-augmented care.
Factors affecting confidence during clinical decision-making
Once the trustworthiness of an AI technology and its implementation are established, primary care clinicians will need to develop their own confidence in weighing up AI-derived information against traditional clinical data sources. Clinicians need appropriate confidence in AI-derived information on a case-by-case basis, with a critical eye for situations where the AI output does not align with their clinical intuition. Algorithms have the potential to influence the psychology of human clinical decision-making through cognitive biases, yet clinicians may not always be conscious of these biases, even in routine clinical decision-making. Some of the most common cognitive biases clinicians are susceptible to when utilizing AI-derived information for clinical decision-making include:
- Automation bias – accepting AI predictions uncritically
- Aversion bias – being overly sceptical of AI, despite strong evidence supporting its performance
- Alert fatigue – ignoring AI alerts, due to a history or perception of too many incorrect alerts
- Confirmation bias – accepting AI predictions uncritically when they agree with the clinician’s initial impression
- Rejection bias – rejecting AI recommendations without due consideration, when they contradict the clinician's intuition
Research suggests that aspects of primary care work may make GPs particularly vulnerable to these biases, and that this could lead to inappropriately high levels of confidence in AI. For example, clinicians appear to be more trusting of AI recommendations that fall outside their own expertise; generalists, junior clinicians and those with low confidence in their own clinical ability are least likely to question AI technology. In addition, when clinicians feel under time pressure or need to make urgent decisions, the tendency towards automation bias increases. Risk aversion, driven by fear of missed diagnoses, litigation and complaints, further increases vulnerability to automation bias in the context of false-positive AI results. As generalists facing significant time pressure and growing numbers of patient complaints, GPs may be particularly susceptible to automation and confirmation bias. These findings are supported by research on GPs utilizing AI technology for skin cancer detection: when the AI provided erroneous information, only 10% of GPs were able to correctly disagree with the AI diagnosis.
Successful adoption of AI in general practice will require assurances that clinicians are safeguarded against inappropriately high confidence in AI technology. Methods of achieving this include prioritizing further research into human–AI factors, co-designing AI technology with frontline clinicians, and carefully considered implementation and workflow integration. Robust training and safety protocols will also be required.
The importance of explaining and justifying decisions to patients is emphasized by the GMC in Good Medical Practice. AI products are often ‘black boxes’ that cannot explain how they arrive at their recommendations. Efforts are being made to ‘unlock the black box’ through explainable AI (XAI) techniques, which aim to produce a ‘human-like explanation’, as this has been shown to increase the confidence of clinicians utilizing AI technology. However, emerging evidence suggests that current XAI approaches may offer only a misleading ‘veneer of explainability’ and are not yet of an appropriate standard for clinical utilization.
How do we achieve clinician confidence in AI?
Confidence in AI utilized within general practice and family medicine will ultimately depend on three key areas: the trustworthiness of the technology, the robustness of its implementation and the critical appraisal skills of the clinician making AI-assisted clinical decisions. A strong understanding of the human–AI interaction and of the strengths and limitations of AI-assisted clinical decision-making will be vital, and will need to be addressed through considered product design and effective clinician education.
Human–AI interaction and the impact of cognitive biases on clinical decision-making are still poorly understood. Further research into how the format, timing and presentation of AI outputs affect clinical decisions is needed. Co-designing AI technology with primary care clinicians and patients can help ensure the product solves meaningful clinical problems, minimizes the risk of clinical errors, maintains or improves efficiency and is valuable to users.
The systems changes that accompany any new technology in primary care also deserve particular attention. Confidence can be enhanced through careful workflow design and resourcing, and deployment should ensure the technology improves clinicians’ working experience rather than adding complexity, workload or administrative burden. Education will play a crucial role in developing clinical AI confidence at every level, from design, validation and implementation to clinical utilization. Later this year, the healthcare workforce’s training needs with regard to AI technology will be examined in detail in a second report by HEE and the NHS AI Lab.
Disclaimer: Jay Verma is a director of Data Care Solutions (Hayes, UK), a healthcare consultancy company that supports primary care providers with healthcare data analytics and service provision.
The opinions expressed in this feature are those of the authors and do not necessarily reflect the views of Future Medicine AI Hub or Future Science Group.