Key takeaways
Avoid using open generative AI for confidential privileged information
The overall direction of travel is to refrain from using any form of Generative AI tool to process confidential or privileged information.
Guiding clients to keep privileged material away from open AI
Legal professionals should advise their clients to refrain from using open Generative AI tools to process their own confidential or privileged information.
Stay updated with official generative AI guidance
Regularly review official guidance relating to use of Generative AI tools.
We reported recently on the decision of the Upper Tribunal (Immigration and Asylum Chamber) in Munir -v- Secretary of State for the Home Department [2026] UKUT 81 (IAC), the first case in England and Wales to address directly whether using public AI platforms can jeopardise legal privilege. The answer was a resounding yes: the Tribunal held that 'Uploading confidential documents into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege.'
This is an issue which affects other jurisdictions, and businesses with operations outside the UK may need to be aware of the position in those jurisdictions too.
In the US, the issue was considered recently in United States -v- Bradley Heppner (US -v- Heppner No 25 Cr 503 (SDNY)).
Same approach or different?
In Heppner, the defendant independently input information he had learned from his lawyer into an AI tool (Claude by Anthropic). The debate was whether the US Government should be able to see the content of the AI output generated in response to that material, where the content was claimed to be protected by privilege. An important factor was that no lawyer had been involved in directing the use of AI in the case. As with the Munir case in the UK, the stance taken was that content generated by such tools was not protected by privilege. The reasoning related mainly to confidentiality: sharing privileged information with a generative AI tool is equivalent to sharing it with any other third party, and privilege is waived. As there is little available or easily accessible information on how such information is stored within these tools, confidentiality, a cornerstone of the principle of privilege, is arguably infringed on this premise alone. The position in the US could well be different where a lawyer directs the use of AI within a confidential (closed-system) framework.
Key developments to expect
These cases have set the wheels in motion for what is likely to be an increasingly debated topic: the confidentiality of these tools and how their use sits with the principles of legal privilege. We now have decisions illustrating that the UK and US positions are clear and aligned, namely that content generated through unsupervised use of open AI tools is not protected by the doctrine of privilege.
As AI is a global phenomenon, these decisions are likely to be instructive in other jurisdictions. In the interim, and pending further clarification, it is advisable to proceed with caution on the assumption that no privilege attaches to material shared with any public AI tool.
Businesses operating worldwide should avoid sharing information conveyed in lawyer-client communications, or other privileged materials, with anyone or any tool outside those traditional and protected relationships. Further, even where a tool presents itself as a confidential, pseudo-autonomous service that can safely store information, nothing should be inputted into it unless the person controlling the input would be satisfied with its later disclosure, and care should be exercised at all times.
Key guidance for clients, key guidance for you
We should not really be surprised by these decisions. In the UK in particular, the official Artificial Intelligence (AI) Guidance for Judicial Office Holders makes one thing unmistakably clear. It says: ‘do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.’ The decision in Munir was merely a reflection of this position.
It is very clear that no confidentiality should be assumed. Even if all legal professionals abide by this principle, privilege could still be waived if a business is unaware of the material risks and unknowingly discloses information relating to its proceedings to an AI tool, as happened in Heppner. Many scenarios remain to be considered: what if a party’s user information is obtained through a supervening event or a targeted cyber-attack, or a user fails to opt out of certain settings through insufficient knowledge of them? For example, an AI platform’s terms may not be readily apparent and may quietly allow a user’s conversations to be retained, meaning the data is stored on external servers. Issues around AI use and privilege are likely to continue to develop, so take care when using AI platforms and monitor further official guidance in this area.
For further information on this topic please contact Martha Allen and Charlotte Myers.
This article was co-authored by Paralegal, Charlotte Myers.

