Professional negligence: who is responsible when AI gets it wrong?

Article | 05.05.2026 | 6 mins read

Key takeaways

AI hallucinations increase professional negligence exposure

Unverified AI errors can trigger risks and liability for professionals.

Responsibility cannot be delegated to AI

Professionals remain accountable for accuracy regardless of AI use.

Courts demand oversight and verification

Judges and regulators require safeguards, supervision and accuracy checks.

The scale of AI-related errors is becoming increasingly clear: it is reported that more than 1,350 cases worldwide have referred to inaccurate or hallucinated AI-generated content.

Against this backdrop, AI has become an increasingly common part of day-to-day professional practice. Solicitors, barristers, surveyors and other professional advisers are now regularly using AI tools to assist with research, drafting and analysis of documents. When used properly, these technologies can improve efficiency and reduce costs for clients. However, where AI tools are used without adequate supervision or verification, or where their output is relied upon without question, significant risks can arise for the professionals who use them.

Of particular concern are so-called AI “hallucinations”, where generative AI systems produce information that appears credible but is in fact wrong. In these situations, responsibility rests with the professional, not the AI tool, and the professional may be exposed to disciplinary action and, in appropriate cases, professional negligence claims.

This position is reflected in a number of recent regulatory and industry publications, which make clear that professionals are expected to understand both the benefits and the limitations of AI tools and to have proper safeguards in place to ensure accuracy.

The Law Society’s ‘Shaping the Future of Agentic AI in Legal Practice’, the Civil Justice Council’s Interim Report on the ‘Use of AI for Preparing Court Documents’, and the RICS’ new professional standard guide ‘Responsible Use of AI in Surveying Practice’ (all published and/or implemented in the first half of 2026) consistently emphasise that professionals remain fully responsible for the accuracy of their work, regardless of whether AI tools have been used in its preparation. We have reported on the CJC’s Interim Report and Consultation here.

The courts in England and other jurisdictions have reinforced these principles with increasing clarity. In ‘Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin)’, the High Court criticised legal representatives for relying on AI generated material containing fictitious authorities and warned that lawyers who do not comply with their professional obligations in this respect risk serious sanctions, underlining that AI is a tool and not a substitute for professional judgment.

Dame Victoria Sharp noted that: “Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.”

We have reported on the decision in Ayinde in more detail here.

Similar concerns were evident in ‘Elden v Revenue and Customs Commissioners [2026] UKFTT 41 (TC)’, where the tribunal imposed enhanced verification requirements following inaccuracies within case summaries that had clearly been prepared by AI. The tribunal’s response underlined that a failure to check and verify information will not be excused simply because AI tools were involved.

Where professionals fail to properly supervise, verify or disclose their use of AI and loss results, those failures may give rise to professional negligence claims, notwithstanding the involvement of new technology: liability remains with the professional concerned.

Hill Dickinson has extensive experience in handling professional negligence disputes and can assist where concerns arise about the standard of service provided by a professional, including in cases where the use of AI may have contributed to an error or resulted in loss.
