AI - Judicial Guidance

Much of the press in general, and the legal press in particular, focused during 2023 on the implications of AI. In December 2023, a number of the most senior judges in the UK (including Sir Geoffrey Vos, Master of the Rolls, and Lord Justice Colin Birss, Deputy Head of Civil Justice) signed off on a document entitled ‘Artificial Intelligence (AI) – Guidance for Judicial Office Holders’. Its contents are of interest not only for the general direction of travel of AI within the legal profession, but also for everyone engaging with the legal system, whether in their business or home lives. The introductory comments note: “The use of Artificial Intelligence (‘AI’) throughout society continues to increase, and so does its relevance to the court and tribunal system. All judicial office holders must be alive to the potential risks.” Of particular importance, as the guidance emphasises, is the need to be aware that the public versions of these tools are open in nature, and that no private or confidential information should therefore be entered into them.

Key Points

The guidance sets out key risks and issues associated with using AI and some suggestions for minimising them:

  1. Understanding AI and its applications

    The guidance notes that judicial office holders should ensure they have a basic understanding of the capabilities and potential limitations of any AI tool before using it, specifically:
      

    1. AI chatbots: Chatbots do not provide answers from authoritative databases – they generate text using an algorithm based on the data they have been trained on. This means the output is what the model predicts to be the most likely answer, which is not necessarily the most accurate one.
       
    2. Confirmatory only: AI tools should not be relied upon to conduct new research; they may, however, be a way of obtaining non-definitive confirmation of material already known.
       
    3. Input: Even with the best prompts, the information provided by an AI tool may be inaccurate, incomplete, misleading, or biased.
       
    4. US/English law: Large language models (LLMs) are typically trained on material published on the Internet, which may result in AI chatbots returning examples of US law (of which there is far more published material) rather than English law.
       
    5. Summary – take care when using these tools.
       
  2. Uphold confidentiality and privacy
     
    This is probably the most important issue identified. It must be remembered that information submitted to a public AI chatbot may be retained and used to respond to queries from other users, unless the chat history has been specifically disabled. Only information that is already in the public domain should therefore be entered into a public AI chatbot. In anticipation of a slip-up, the guidance provides that any unintentional disclosure of personal data should be reported by the judicial office holder as a data incident.
     
  3. Ensure accountability and accuracy
     
    It goes without saying that any AI output should be checked before it is used or relied upon. Specific reference is made to AI tools inventing fictitious cases, citations, or quotes, or referring to legislation, articles, or legal texts that do not exist. The guidance was issued before the decision in Harber v Commissioners for His Majesty’s Revenue & Customs [2023] UKFTT 1007 (TC), in which a party in a tax appeal provided the Tax Tribunal with nine previous rulings in support of their case, all of which turned out to have been ‘hallucinated’ by an artificial intelligence program. There have been other similar reported instances in the US.
     
  4. Bias
     
    AI tools based on LLMs generate responses from the dataset on which they were trained. The information generated will inevitably reflect errors and biases in that training data, and this should be taken into account when reviewing any AI output.
     
  5. Security
     
    Security best practices should be followed. This includes using work devices (rather than personal devices) and work e-mail addresses to access AI tools.
     
  6. Responsibility
     
    Again, an obvious point, but judicial office holders are personally responsible for all material produced in their name. That said, the guidance recognises that generative AI can be a useful tool.
     
  7. Be aware that other court/tribunal users may have used AI tools
     
    There is a reminder that all legal representatives are responsible for the material they put before the court and have a professional obligation to ensure it is accurate and appropriate. Legal professionals may use AI, but should refer to its use and confirm that the results have been independently verified. A further warning is given about forgeries, and allegations of forgery, of varying levels of sophistication; judges should be alert to these and, in particular, to the challenges posed by deepfake technology.

Conclusion

This guidance is described as the first step in a ‘suite of future work’ to support the judiciary in its use of AI. More such guidance is likely as the technology advances. The guidance demonstrates the courts’ willingness to adapt to new technology, albeit with a degree of caution. AI tools such as technology assisted review (TAR) and automated contract drafting are well established and are becoming increasingly standard in UK legal practice. Inevitably, we can expect to see more cases and issues involving AI being referred to the courts, although we are not yet in the realms of AI being used to determine disputes. The current guidance is helpful and should be considered whenever an AI tool or application is used in court.

For further information on this topic, please contact Paul Walsh or Moya Clifford.
