AI in Healthcare: A Surplus of Solutions, a Deficit of Impact

Artificial intelligence (AI) is now firmly embedded in healthcare discourse. Across diagnostics, imaging, drug discovery, clinical trials, operational analytics, and administrative automation, AI is repeatedly positioned as a transformative force capable of alleviating pressure on overstretched systems. The number of proposed solutions is vast, demonstrations are increasingly polished, and claims are often ambitious.

Yet a fundamental question remains insufficiently answered:

What care is AI improving, and at what cost?

This question is particularly urgent for publicly funded health systems such as the UK NHS, where workforce shortages, deepening austerity, rising clinical acuity, and structural backlogs are not abstract challenges but daily operational realities. In such contexts, adoption is not driven by technical novelty, but by demonstrable system-level value.

A review of the current evidence base suggests a growing disconnect between AI capability and real-world clinical impact.

A Mature Technology Still Struggling to Embed

The literature consistently shows that AI capabilities in healthcare are both broad and technically advanced. One comprehensive review catalogues applications across drug discovery, clinical trials, patient monitoring, diagnostics, robotics, and genomics, noting that AI systems can outperform humans in narrowly defined tasks (Shaheen, 2021). Similarly, a large narrative review synthesising 44 peer-reviewed studies highlights improvements in diagnostic accuracy, decision support, and administrative efficiency as recurring potential benefits of AI (Chustecki, 2024).

However, most AI systems remain preclinical, experimental, or poorly integrated into routine care. Despite extensive research activity and widespread piloting, operational adoption at scale remains limited. Khan et al. describe healthcare AI as technologically mature but systemically immature, struggling to transition from proof-of-concept into sustainable clinical deployment (Khan et al., 2023).

Technical capability alone does not translate into clinical or operational impact. A system may be accurate, elegant, and innovative, yet still fail to meaningfully change how care is delivered.

Problem Spotting without Problem Solving 

A recurring theme across the literature is that many AI tools excel at describing clinical risk without resolving it. Risk stratification dashboards, predictive alerts, and pattern recognition systems frequently repackage information that clinicians already recognise, albeit at greater scale and speed.

Insight, however, is not intervention.

Chustecki notes that AI-driven decision-support tools often increase cognitive load when they do not automate downstream action, particularly in environments already characterised by alert fatigue and workflow saturation (Chustecki, 2024). Similarly, Khan et al. observe that AI layered onto existing processes often introduces parallel workflows, additional validation steps, and new governance requirements, thereby adding friction rather than reducing it (Khan et al., 2023).

In practical terms, AI systems that stop at “flagging,” “predicting,” or “informing” raise unavoidable operational questions:

  • Who acts on this information?
  • When do they act?
  • With what time, authority, and capacity?

If the answer is “the same overstretched team, operating within the same constraints,” then the AI has not solved a problem. It has created one.

Replace or enhance: the adoption threshold

Healthcare systems adopt technology when it absorbs work, reduces risk, or creates capacity.

AI that merely advises, annotates, or monitors without removing tasks struggles to justify its presence. Shaheen acknowledges that while AI can enhance clinical knowledge and reduce cognitive bias, it rarely replaces human effort at scale in current implementations (Shaheen, 2021). Chustecki reinforces this finding, identifying economic value (such as cost avoidance, reduced length of stay, or administrative automation) as a far stronger driver of adoption than model performance metrics alone (Chustecki, 2024).

This distinction is particularly salient in the NHS context. Introducing additional IT solutions that require procurement, training, governance, monitoring, and continuous validation is not neutral. Unless such systems reduce overheads elsewhere (staffing pressure, duplication, delays, or downstream harm) they exacerbate financial and operational strain (Khan et al., 2023).

In this environment, there is little room for compromise. AI must either:

  • Replace human effort, by automating administrative, cognitive, or coordination tasks; or
  • Genuinely enhance care, by measurably improving outcomes, safety, or access in ways that change clinical pathways.

Anything else risks becoming technological decoration.

Evidence that answers the wrong questions

There is a fundamental misalignment between how AI systems are evaluated and how healthcare leaders make adoption decisions. Academic and commercial AI publications focus on technical performance metrics such as sensitivity, specificity, and AUROC (area under the receiver operating characteristic curve). These measures describe how well a model can distinguish between groups, for example by identifying patients at higher or lower risk.

While such metrics are useful indicators of technical performance, they offer limited insight into real-world clinical value. A model can achieve a high AUROC and still fail to change clinical decisions, reduce harm, release clinician time, or improve patient outcomes. In practice, these are the outcomes that matter most to healthcare systems operating under severe capacity and financial constraints.
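To make the limitation concrete, a minimal sketch of what AUROC actually computes can help. AUROC is equivalent to the probability that a randomly chosen positive case is ranked above a randomly chosen negative case (the Mann-Whitney interpretation). The sketch below uses hypothetical readmission-risk scores, not any real model or dataset: even a perfect score of 1.0 describes ranking alone, and says nothing about whether anyone acts on the alert, with what capacity, or at what cost.

```python
# Minimal sketch: AUROC measures discrimination (ranking) only.
# Scores and labels below are hypothetical, for illustration.

def auroc(scores, labels):
    """Probability that a randomly chosen positive case outranks a
    randomly chosen negative case; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (label 1 = readmitted)
scores = [0.92, 0.85, 0.70, 0.40, 0.20, 0.10]
labels = [1,    1,    1,    0,    0,    0]

print(auroc(scores, labels))  # 1.0: perfect discrimination, zero proof of impact
```

A model can achieve this ceiling value on held-out data and still leave every downstream question open: whether the flag changes a clinical decision, releases staff time, or prevents a single admission.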

Healthcare leaders do not make investment decisions based on statistical discrimination alone. They require evidence that an intervention alters pathways, removes work, reduces cost, or measurably improves outcomes. Performance metrics that stop at prediction or classification do not answer these questions, particularly when implementation introduces additional governance, training, monitoring, or workflow complexity.

As a result, decision-makers are often asked to invest on the basis of technical promise rather than system-level proof. In risk-averse, resource-constrained healthcare environments, this disconnect between how value is measured and how it is experienced slows adoption.

Risk Redistribution is Not Risk Reduction

Governance and risk are integral to AI adoption. Concerns around data privacy, bias, explainability, accountability, ethics, and ongoing monitoring recur consistently in discussions of healthcare AI and remain unresolved in many real-world deployments.

In practice, many AI systems introduce new layers of oversight rather than removing risk. Tools that rely on continuous human validation, additional governance structures, or parallel review processes tend to shift responsibility rather than reduce it. This redistribution of risk can undermine trust and slow adoption in already pressured healthcare systems. When accountability is unclear (particularly when errors occur) confidence erodes further.

Healthcare systems are far more likely to adopt technology that clearly absorbs risk and effort than tools that fragment responsibility across more people, processes, and committees.

These dynamics are well recognised in the literature, but they are most clearly understood through the realities of frontline practice and system implementation.

Conclusion: What the Healthcare AI Ecosystem Must Deliver Next

Clinicians will not adopt new technology if it does not work better than existing practice from the very first use. In environments characterised by constant change, heavy workload, and limited capacity, there is little tolerance for tools that add friction, require workarounds, or fail to deliver on their promise immediately. When a system underperforms at first contact, confidence is quickly lost and rarely recovered.

The evidence supports a simple conclusion: healthcare does not need more intelligence layered onto already strained systems. It needs solutions that either do the work themselves or materially change how care is delivered.

The next phase of healthcare AI must therefore move beyond technical achievement toward operational relevance. Less emphasis on what models can predict, and more on what they remove. Less focus on novelty, and more on whether time is saved, risk is reduced, or outcomes improve in ways that clinicians and patients can tangibly experience.

Before committing millions to another diagnostic or decision-support system, the questions that matter are pragmatic. Does this replace work? Does it remove steps from the pathway? Does it reduce admissions, delays, or harm?

This shift requires a corresponding change in emphasis.

Less:

  • “Look what we can predict”
  • “Look how clever the model is”
  • “Look how much data we processed”

More:

  • “This replaces two steps in the pathway”
  • “This removes X hours of clinician time per week”
  • “This reduces admissions, complications, or delays”
  • “This measurably improves patient experience or outcomes”

In overstretched healthcare environments, resistance to poorly performing technology is not cultural inertia; it is a rational response to change fatigue and constrained capacity.

Healthcare does not need cleverer AI. It needs technology that works, first time, and makes care easier.

Anything else is just more noise in an already crowded room.

Authored by John West and Melissa Way, 2026

References

Shaheen, M. Y. (2021).
Applications of artificial intelligence (AI) in healthcare: A review. ScienceOpen Preprints.
https://doi.org/10.14293/S2199-1006.1.SOR-.PPVRY8K.v1

Khan, B., Fatima, H., Qureshi, A., Kumar, S., Hanan, A., Hussain, J., & Abdullah, S. (2023).
Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomedical Materials & Devices, 3, 1–15.
https://doi.org/10.1007/s44174-023-00063-2

Chustecki, M. (2024).
Benefits and risks of AI in health care: Narrative review. Interactive Journal of Medical Research, 13, e53616.
https://doi.org/10.2196/53616

John West

Medtech Contributor - Chief Commercial Officer, Device Access
