Artificial intelligence tools are already transforming the way construction and infrastructure disputes are run, from drafting submissions to reviewing thousands of documents in seconds. The Chartered Institute of Arbitrators (CIArb) has released its Guideline on the Use of AI in Arbitration (2025), a timely and much-needed framework for practitioners, arbitrators, and parties grappling with the integration of AI into arbitral proceedings. As AI tools increasingly feature in legal research, document analysis, and even decision support, the Guideline sets out a cautious but constructive path forward.
The Guideline doesn’t try to regulate AI in the abstract. Instead, it’s designed to help parties, counsel, and arbitrators understand where the technology can add value, and where it might cause problems, including procedural unfairness or even the unenforceability of an award.
Why this matters
The Guideline comes at a time when AI is moving fast and adoption is uneven. In some arbitrations, one party may be using advanced tools to streamline document analysis or predict outcomes, while the other is relying entirely on manual processes. That imbalance creates obvious concerns about fairness, transparency, and the integrity of the process. It also raises real legal risks, especially if generative AI is used in a way that introduces factual errors or “hallucinated” authorities into the record.
The CIArb Guideline doesn’t ban AI use, but it urges caution. It emphasizes that AI is not a substitute for human judgment, particularly the judgment of the tribunal, and that its use must be open, proportionate, and tailored to the circumstances of each case.
Key takeaways for arbitration practitioners
The Guideline offers a few clear signals about where the profession is heading.
First, disclosure matters. If a party uses AI in a way that could affect the evidence, analysis, or fairness of the proceeding, there’s likely an obligation to disclose it. That includes disclosing what tools were used, for what purpose, and whether the output was independently verified. The tribunal can also require that kind of disclosure through a procedural order.
Second, party autonomy still applies. If the parties agree on how AI can be used (for example, that it is acceptable for document review but not for drafting legal argument), that agreement will generally govern. The Guideline includes a model agreement parties can adapt and incorporate into their terms of reference.
Third, arbitrators need to tread carefully. While they can use AI tools to support administrative work or sift through voluminous records, they’re cautioned against relying on AI for legal reasoning or outcome prediction. Any use must be disclosed to the parties if it could materially influence the award, and the tribunal remains fully responsible for its decision-making.
Finally, context is key. A tribunal in a high-value, document-heavy infrastructure dispute might reasonably allow sophisticated AI-assisted review tools, especially where both parties have access to similar technology. But the same approach might not be appropriate in a smaller matter involving self-represented parties or in a jurisdiction with restrictive data privacy rules.
What this means in practice
For Canadian arbitration counsel, the Guideline is a useful reference point: not binding, but persuasive. It reinforces that AI tools can be helpful, but their use must be transparent, ethical, and consistent with the parties’ procedural rights. It also flags the growing expectation that counsel understand how these tools work and how they should (or shouldn’t) be used in a hearing context.
At a minimum, practitioners should be prepared to:
- explain whether and how AI was used in preparing evidence or submissions,
- respond to questions from the tribunal about the reliability of those tools, and
- adapt quickly if the use of AI becomes a contested issue in the proceeding.
The days of treating AI like a “black box” are over. As tools evolve and expectations of procedural efficiency rise, arbitration practitioners must remain conversant not only with AI’s capabilities but also with its constraints. Whether you’re a party, counsel, or arbitrator, the message is the same: use AI where it helps, but don’t hide the ball.
While AI may never replace the arbitrator’s pen, it is already reshaping how we prepare, present, and manage disputes. The challenge is to harness its potential without compromising fairness, transparency, or enforceability; guidance like this makes that task easier.
Transparency and accountability remain cornerstones of arbitral fairness, and the more AI is involved, the more deliberate we all need to be.