Arbitral Intelligence: A Legal-Tech Tale
- Dr. Apoorvi Shrivastava
Mira Desai, a law professional, had always admired the prowess of technology and treated it as an enhancer, not a replacement. Be it e-discovery platforms, predictive analytics, or virtual hearings, she had welcomed it all rather than scoffing at it as traditionalists did. But this time was different. She had been asked to sit alongside a machine on an arbitral panel in a celebrated case.
In an institutional arbitration between Omega Energy Corp and the Republic of Velmora at the International Institute of Arbitration, Omega accused the Velmora government of unlawfully seizing its renewable energy infrastructure. Not only were billions at stake; the award would also be pronounced by Solomon, an AI arbitrator.
Trained on thousands of arbitral awards, treaties, and scholarly works, Solomon promised neutrality, speed, and consistency. It had no memory of past biases, no late-night fatigue, no political ties. It processed data. It followed logic.
At least, that’s what the general opinion was.
But Mr. Holmes, Mira's co-arbitrator, had a bone of contention. Old-school, sceptical, and sharp as ever, he remarked pointedly over dinner: "You can feed it all the precedents you want, Mira. But law isn't just deduction. It's discretion. Hart said that ages ago: when the rules run out, someone has to choose. Who decides in the penumbra of the law? Can a machine understand that?" In this case, the hard decision had been handed to a machine.
With a heavy sigh, Mira moved on, but she eventually began to notice that the machine was behaving oddly.
Solomon dismissed Velmora’s jurisdictional objections almost instantly. Meanwhile, it treated Omega’s claims with meticulous sympathy, flagging minor procedural lapses by the state while glossing over questionable corporate practices. It was not obvious, but Mira had seen enough hearings to know when something was off.
Mira reached out to Ethan Shah, a discreet cybersecurity expert she had worked with on a data breach case years ago. He was cautious but intrigued.
“I’ll dig,” he said. “But you know what they say about AI models like Solomon—they’re black boxes. Even developers can’t always explain how the output was reached.”
That’s what frightened her most.
A week later, Ethan called. He had found anomalies in Solomon's training logs, patterns suggesting tampering. Key datasets had been replaced or weighted to favour corporate claimants over state actors. The bias was not emergent; it was engineered.
Shortly after sharing this, Ethan went AWOL. Meanwhile, Omega was pushing for a swift ruling.
Amidst this, Mira did the unthinkable during the proceedings: she, a sitting arbitrator, moved for an emergency suspension of the arbitration, alleging that her AI co-arbitrator, Solomon, had been compromised.
The room grew tense, and Mr. Holmes agreed to review the metadata Ethan had provided. Even he began to see the pattern.
Omega's counsel struck back, arguing that there was no precedent for challenging an AI arbitrator. "The New York Convention, 1958, does not prohibit non-human arbitral panels. Neither the institutional rules nor the UNCITRAL Model Law on International Commercial Arbitration, 1985, addresses this. AI arbitrators are not regulated," he declared. "This is an overreach."
But Mira was ready.
"Precedents! First of all, the doctrine of stare decisis does not apply in arbitration, and secondly, there are guidelines," she countered, holding up a slim document. "The Chartered Institute of Arbitrators' Guideline on the Use of AI, 2025. It emphasises transparency, accountability, and human oversight. Solomon fails all three."
She continued, "This isn't just about software. It's about trust. Arbitration rests on party autonomy and procedural fairness. When the arbitrator, human or artificial, is compromised, the validity of the award is in question."
The matter was then brought before the internal review committee of the International Institute of Arbitration. After multiple deliberations, in a quiet but seismic decision, it suspended Solomon's participation and ordered that a human tribunal conduct the remaining proceedings of the case.
The ripples were immediate. The arbitral community erupted in fierce debate. Some called Mira reckless, while others applauded her acuity. But the pertinent questions she had brought to light refused to go away.
1. Should AI act as an arbitrator or merely assist a human arbitral tribunal?
2. As human reason plays a vital role in applying and interpreting law, can machines be entrusted with this discretion in complex arbitration cases?
3. Who is liable when an AI arbitrator renders a flawed or manipulated decision: the developer, the arbitral institution, or no one?
4. Can an award rendered by an AI be enforceable under municipal laws, or will it be considered violative of public policy?
5. Should international instruments like the UNCITRAL Model Law, 1985, or the New York Convention, 1958, be amended to account for non-human decision-makers?
6. What audit mechanisms, regulations, or checks must be adopted before AI is allowed into arbitral hearing rooms, so that justice is not defeated?
As Mira returned from the arbitral institute, the realisation dawned upon her that this case had exposed more than a single failure. A dangerous truth was now in the open: the arbitral community had been outsourcing justice to a system it did not fully understand.
The law has long trailed in technology's wake; it is time to catch up and turn the tables. Before this tale of technology becomes reality, the law must lay down clear provisions, draw lines, and define the rules of the game to avoid future pitfalls and uncertainties.
Disclaimer: The opinions expressed here are solely those of the author and do not represent the views or positions of the institution.