Key takeaways
AI Use in International Arbitration Is Set to Grow
The 2025 Survey predicts significant adoption, driven by time and cost savings and the reduction of human error.
Accuracy, Confidentiality and Bias Remain the Main Obstacles
Concerns about errors, data security, bias and regulatory gaps temper respondents' enthusiasm.
Clear Guidance and Training Are Essential
Transparency, guidelines and training are needed to avoid misuse and costly challenges.
"AI is only as smart as the lawyer asking the question.”
The 2025 International Arbitration Survey, co-authored by Queen Mary University of London and published on 1 June 2025, is titled “The path forward: realities and opportunities in arbitration” (the 2025 Survey).
Among other things, and based on the responses received from participants, the 2025 Survey predicts that the use of Artificial Intelligence (AI) in international arbitration will grow significantly in the coming years. The principal drivers are saving party and counsel time, reducing costs and reducing human error. The principal obstacles are concerns about errors and bias, confidentiality risks, lack of experience and regulatory gaps. Nonetheless, the general consensus was that, over the next five years, international arbitration users will adopt, and adapt to, AI because of its potential for efficiencies.
The enthusiasm for greater use of AI in international arbitration among respondents to the 2025 Survey was, however, tempered by the desire for transparency, clear guidelines and training on the use of AI.
Additionally, the theme of London International Disputes Week, which took place from 2 to 6 June 2025, was “Innovation in Dispute Resolution: navigating the global risks”. Among the many sessions that took place, discussions around AI in dispute resolution delivered a clear message: the use of AI tools in international arbitration is not a distant prospect but a reality.
It is, therefore, worth considering some of the issues arising for the international arbitration community as it prepares to rise to the AI challenge.
Definition of AI system
In 2019, the Organisation for Economic Co-operation and Development (OECD) published its Recommendation on AI (the AI Principles). The definition of an AI system in the AI Principles was subsequently updated in an Explanatory Memorandum dated March 2024 (OECD Artificial Intelligence Papers No. 8) and is now:
"a machine-based system that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The OECD decided not to define AI but rather to focus on the concept of an AI system, which was considered more tangible and actionable, especially in a policy-making context. The OECD definition has been adopted by the Chartered Institute of Arbitrators (CIArb), among many others.
The EU Artificial Intelligence Act, which entered into force in August 2024, also provides a definition of an AI system, as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”
Use of AI in international arbitration
Pros
According to the findings of the 2025 Survey, use of AI in international arbitration is beginning to boom. Even those who have never used AI for arbitration tasks largely expect to incorporate it into their future practice.
The most common use of AI in arbitration was found to be conducting factual and legal research. The ability to use AI to review and analyse vast numbers of documents efficiently was also widely acknowledged, with AI document review tools helping to expedite the management of huge volumes of data that would otherwise take weeks to process. Other labour-intensive tasks, such as preparing chronologies and summarising witness statements and depositions, can likewise be made more efficient and cost-effective through the use of AI tools.
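To make the point concrete, the short Python sketch below illustrates the kind of routine, labour-intensive task such tools automate: pulling dated events out of document text and sorting them into a first-pass chronology for human review. The documents and the simple date pattern are invented for illustration; commercial AI review platforms are, of course, far more sophisticated.

```python
# Illustrative sketch only: build a draft chronology from document text.
# The documents and the date pattern below are hypothetical examples.
import re
from datetime import datetime

DATE_PATTERN = re.compile(
    r"\b(\d{1,2} (?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{4})\b"
)

documents = {
    "Witness statement of A": "On 3 March 2023 the parties met in London. "
                              "A further call took place on 12 April 2023.",
    "Exhibit B-7": "The contract was signed on 15 January 2023.",
}

events = []
for source, text in documents.items():
    for match in DATE_PATTERN.finditer(text):
        date = datetime.strptime(match.group(1), "%d %B %Y")
        # Keep the text around the date as a rough event description.
        start = max(match.start() - 60, 0)
        snippet = text[start:match.end() + 60].strip()
        events.append((date, source, snippet))

# Sort the extracted events into date order for human review.
for date, source, snippet in sorted(events):
    print(f"{date:%d %b %Y} | {source} | {snippet}")
```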
AI adoption in data analytics (i.e. converting raw data into actionable insights) was reported to be more moderate but was expected to increase because it boosted efficiency, particularly for organising large datasets and identifying trends. For example, an appropriate tool could analyse the experience of arbitrators as stated in their CVs and so help prepare a list of suitable arbitrators much more quickly.
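A minimal sketch of that arbitrator-shortlisting example is set out below. The candidates, search criteria and weightings are invented for illustration only; a real analytics tool would draw on far richer structured data and more sophisticated matching.

```python
# Illustrative sketch only: rank hypothetical arbitrator CV summaries
# against weighted attributes a party is looking for.
ARBITRATOR_CVS = {
    "Candidate A": "20 years construction disputes, ICC and LCIA appointments, fluent Spanish",
    "Candidate B": "energy and gas price review arbitrations, UNCITRAL, based in Singapore",
    "Candidate C": "construction and infrastructure, HKIAC appointments, Mandarin speaker",
}

REQUIREMENTS = {      # attribute -> weight (invented for this example)
    "construction": 3,
    "icc": 2,
    "spanish": 1,
}

def score(cv_text: str) -> int:
    """Sum the weights of the required attributes found in a CV summary."""
    text = cv_text.lower()
    return sum(weight for term, weight in REQUIREMENTS.items() if term in text)

# Produce a ranked shortlist for human review.
shortlist = sorted(ARBITRATOR_CVS, key=lambda name: score(ARBITRATOR_CVS[name]), reverse=True)
for name in shortlist:
    print(name, score(ARBITRATOR_CVS[name]))
```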
Other uses of AI in international arbitration that may increase over the next five years, if concerns are appropriately addressed, are: drafting correspondence; drafting submissions; and evaluating legal arguments.
Cons
Some stakeholder concerns and comments, as highlighted in the 2025 Survey, included the following:
Accuracy. Perceived linguistic or cultural biases with some AI platforms.
The risk of uncertain quality of both data sources and AI-generated content. “Garbage in, garbage out?”
Confidentiality concerns with open-source AI, i.e. AI systems that are made freely available to the public for use, study, modification and sharing.
Data security. Risk of data breaches.
When drafting correspondence, limited capacity for AI to capture the tone necessary in arbitration.
Limited use in drafting complex legal submissions, although AI may be useful in preparing a first draft.
AI should not be relied on to evaluate legal arguments: it may oversimplify complex arguments, and current AI applications are thought to lack reasoning capabilities.
Hallucinations, i.e. errors in the results produced by AI tools where the AI model does not have access to accurate information or prompts. The English courts have repeatedly warned against the use of AI without human oversight and verification – see, for example, Ayinde -v- London Borough of Haringey and Al Haroun -v- Qatar National Bank [2025] EWHC 1383 (Admin).
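By way of illustration of the human-oversight point, the sketch below shows one simple verification step: extracting neutral citations from an AI-generated draft and flagging any that a human has not checked against the underlying authority. The draft text and the “verified” list are invented for the example; real verification means reading the authority itself, not merely matching a reference string.

```python
# Illustrative sketch only: flag citations in an AI draft that have not
# been human-verified. The draft and the verified list are hypothetical.
import re

NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+EWHC\s+\d+\s+\([A-Za-z]+\)")

# Citations a human has actually checked against the law report.
verified_authorities = {"[2025] EWHC 1383 (Admin)"}

ai_draft = (
    "As held in Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin), "
    "and in Smith v Jones [2024] EWHC 9999 (Comm), "  # invented 'hallucinated' citation
    "unverified AI output carries risk."
)

for citation in NEUTRAL_CITATION.findall(ai_draft):
    status = "verified" if citation in verified_authorities else "NOT VERIFIED - check before relying on it"
    print(f"{citation}: {status}")
```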
Ultimately, respondents to the 2025 Survey thought that AI was a tool, no more, no less. It should assist, but should not replace, human judgement.
This view may allay the concerns of junior lawyers and paralegals who may otherwise justifiably worry that they will be replaced by AI.
General concerns
“An arbitrator must know their case better than anyone else. AI cannot replace that fundamental duty.”
Some general concerns regarding the use of AI in international arbitration (many of which are applicable to use of AI in dispute resolution generally) are:
Competitive disadvantage where parties or their lawyers do not have the same resources or equivalent access to AI tools: “divisions between the haves and the have nots”.
Lack of experience with, or knowledge of, AI and the need for adequate training and guidelines to avoid misuse.
Client expectations for AI use. Do they want it or not? Are they prepared to pay for it or not?
The risk of more post-award challenges, particularly where arbitrators have used AI to draft awards. The concern that an arbitral tribunal may have outsourced its adjudicative role was highlighted in a recent case filed in US federal court, LaPaglia -v- Valve Corp.
Lack of regulation or guidelines about use of AI and disclosure of its use.
Regulation
“The world wants to regulate AI but does not quite know how.”
This is a key issue. The EU has taken a proactive regulatory approach to AI. The EU Artificial Intelligence Act, which came into force in August 2024, aims to ensure that AI systems are safe and respect fundamental rights, while also fostering innovation.
The UK does not yet have general statutory regulation of AI, although the UK Government has announced that it is taking steps towards creating a specific framework around responsible AI and its use. The major concern for government and businesses is to avoid stifling innovation and technological advances through overregulation.
Arbitral tribunals have wide discretionary powers to manage arbitral proceedings – see, for example, Article 17 of the UNCITRAL Rules – and these powers can accommodate AI use. However, more specific frameworks are required and, as a result, a number of arbitral institutions and organisations have published guidance on the use of AI in arbitration. Among these are:
The Silicon Valley Arbitration & Mediation Center (SVAMC) published a tech-focused set of principles, its Guidelines on the Use of Artificial Intelligence in Arbitration, in April 2024.
The SCC published a Guide to the Use of AI in Cases Administered under the SCC Rules in October 2024.
The Vienna International Arbitral Centre (VIAC) published a Note on the use of AI in arbitral proceedings in April 2025.
Also in April 2025, the CIArb published its Guideline on the Use of AI in Arbitration.
These guidelines, or “soft law”, whilst principally non-binding, provide useful principles for both arbitrators and arbitration users.
However, the rules of most of the major arbitral institutions (including the ICC, LCIA, SIAC and HKIAC) are, at present, silent on the use of AI by arbitrators.
Moving forward
In February 2025, the Master of the Rolls, Sir Geoffrey Vos, gave a speech at the Lawtech UK Generative AI event, in which he advocated for a balanced, informed, and ethical integration of AI into the UK legal system. He also stressed the importance of improving access to justice and preserving fundamental rights in an AI-driven future.
At the highest level, the UK legal sector has accepted that AI is fundamental to the future of legal services and legal process. Whilst ethical and practical concerns have been voiced, it is to be anticipated that, with time, these will gradually be addressed.
Certainly, many of those participating in the 2025 Survey were optimistic. One view was that, at some point, confidentiality concerns would be alleviated and AI would become a commercial advantage for parties. AI is expected to revolutionise the way in which arbitrators work, making arbitrations faster and cheaper. It will be interesting to see what the next five years bring.


