Key takeaways
AI is becoming integral to frontline clinical decision‑making
Its growing use in triage, diagnostics and digital consultations means AI‑influenced decisions will increasingly feature in coronial investigations.
Accountability must remain clearly defined
Clinicians retain responsibility for decisions, but organisations must ensure proper governance, validation and oversight of any AI tools used in care pathways.
Explainability poses real disclosure challenges
Where AI systems function as 'black boxes', Coroners may require expert evidence to interpret how outputs informed clinical judgement and whether they contributed to the outcome.
Introduction
Artificial intelligence is reshaping the delivery of healthcare services in the UK, particularly in diagnostics and triage. As NHS bodies and clinical providers increasingly use AI in frontline decision-making, coronial inquests will need to adapt to the practical and legal challenges that accompany its adoption. When questions of accountability, disclosure and causation arise, how will coroners’ courts respond?
Witness challenges
One of the first hurdles will be identifying who can 'speak' for an AI system. AI tools are already being used in emergency departments and GP practices to support tasks traditionally carried out by clinicians: helping to interpret scans, highlighting possible signs of cancer, or guiding triage decisions.
In primary care, online consultation and triage systems now allow patients to submit information digitally. These tools then assess the information, suggest a level of risk, and indicate whether urgent clinical follow-up is needed.
This increasing use of AI is prompting questions about clinical oversight and accountability when decisions informed by these tools are scrutinised at an inquest. Where such tools have been used in patient care, coroners may need clear answers to questions such as:
Who designed, trained, or validated the algorithm?
Which clinicians relied on the outputs, and what was the expected standard of oversight?
Is the 'decision' attributable to the clinician, the organisation deploying the AI system, or the developer?
As AI becomes more integrated in frontline healthcare, these questions will only become more pressing.
Disclosure challenges
AI systems introduce disclosure challenges for inquests, as their complex underlying processes can be hard to convey to a lay audience, especially where a jury is involved.
AI tools rely on large datasets and statistical patterns, meaning that even with full cooperation from developers, the basis on which a system reached a conclusion cannot always be explained in a straightforward way. Where an AI system relies on non-explainable ('black box') models, coroners may need expert evidence to bridge the gap between technical outputs and clinical decision-making.
Coroners will therefore need to consider how best to obtain meaningful, comprehensible evidence about how the system was used.
Healthcare organisations deploying AI systems should ensure that audit trails, version histories, risk assessments, and usage logs are preserved and made available.
Should AI failures trigger Prevention of Future Deaths reports or regulatory referrals?
AI can arguably both reduce and create clinical risk.
NHS research shows that well-designed AI tools can reduce diagnostic delays and improve urgent care responsiveness when properly monitored. However, AI-powered systems that misclassify symptoms, prioritise incorrectly, or fail to flag emergencies can exacerbate the kind of delays already implicated in adverse outcomes across emergency and primary care settings.
In cases where the use of AI contributes to unsafe decisions or system failures, coroners may need to consider issuing Prevention of Future Deaths (Regulation 28) reports.
In doing so, coroners may reasonably ask:
Was there adequate governance of the AI system?
Did the organisation have appropriate safeguards in place?
Does wider systemic risk exist?
Conclusion
AI has enormous potential to improve diagnostic accuracy, accelerate triage, and help alleviate the pressures currently felt across the health sector. However, as algorithmic decision-making becomes embedded in clinical workflows, coronial inquests will face novel challenges in identifying ultimate responsibility, securing adequate disclosure, and determining when technological failures merit regulatory scrutiny or preventive recommendations.
Organisations should plan for these issues now: maintain audit trails and documentation, assign clear lines of accountability, and engage early with regulatory expectations.
We know that it can be challenging for organisations to keep on top of what this means in practice. If you would find it helpful to talk through any of the issues raised, our team is available to offer guidance.
This article was co-authored by Trainee Solicitor, Laura Clarke.
