JAMS ADR Insights
Who Picks the Arbitrator — You or the Algorithm?
(This post was co-authored by Giuseppe De Palo and Annie Lespérance, Head of Americas at Jus Mundi.)
When artificial intelligence enters the process of arbitrator selection, the question it raises is often framed as technical. In reality, it is structural.
At stake is not simply how arbitrators are identified or ranked, but how arbitral authority itself is constituted — and whether the mechanisms that give arbitration its legitimacy can withstand algorithmic influence.
To explore that question, JAMS and Jus Mundi convened a joint program that departed deliberately from the standard panel format. Rather than exchanging views in the abstract, the organizers structured the program as a live hearing, complete with a tribunal, assigned counsel and formal argument on both sides of the issue.
The format was intentional. Arbitrator selection is not an administrative step. It is the moment where party autonomy, neutrality, diversity, enforceability and institutional trust converge. Decisions made at this stage shape not only the composition of a tribunal, but the perceived legitimacy of the arbitral process itself.
The purpose of the hearing was not to reach a verdict on artificial intelligence. It was to identify the precise point at which AI assistance risks becoming AI delegation — and to examine why that distinction matters more than any specific technology currently in use.
From that starting point, the debate unfolded. And what it revealed was less about technological capability than about governance design.
The Debate Was Not About Capability
Arguing in favor of AI-assisted selection was Robert Mahari, Associate Director of the Stanford CodeX Center. His position was not that algorithms should replace human decision-makers, but that properly designed systems can introduce measurable discipline into a process long shaped by informal networks.
No one disputed that AI tools can already rank arbitrators by prior appointments, jurisdictional expertise, language, conflicts and even demographic indicators intended to advance diversity. The data sets exist. The algorithms function.
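To make the mechanics concrete, a ranking tool of this kind reduces, at its core, to scoring candidates against weighted criteria. The sketch below is purely illustrative: the candidate fields, weights, and cap on prior appointments are assumptions invented for this example, not a description of any actual product in use.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    prior_appointments: int   # appointments in comparable disputes
    jurisdiction_match: bool  # experience in the relevant jurisdiction
    language_match: bool      # fluent in the language of the arbitration
    has_conflict: bool        # disclosed conflict of interest

def score(c: Candidate) -> float:
    """Illustrative weighted score; the weights are arbitrary placeholders."""
    if c.has_conflict:
        return 0.0  # a conflict is disqualifying, not merely penalized
    s = min(c.prior_appointments, 10) / 10.0  # cap to limit repeat-player dominance
    s += 1.0 if c.jurisdiction_match else 0.0
    s += 0.5 if c.language_match else 0.0
    return s

candidates = [
    Candidate("A", prior_appointments=12, jurisdiction_match=True,
              language_match=True, has_conflict=False),
    Candidate("B", prior_appointments=3, jurisdiction_match=True,
              language_match=False, has_conflict=False),
    Candidate("C", prior_appointments=8, jurisdiction_match=False,
              language_match=True, has_conflict=True),
]
ranked = sorted(candidates, key=score, reverse=True)
```

Even this toy version makes the governance point visible: every choice embedded in `score` — which attributes count, how they are weighted, what is disqualifying — is a policy decision dressed as arithmetic.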
The question, Mahari argued, is whether those systems can be audited, validated and measured against clear benchmarks. Without transparency and empirical testing, AI risks encoding historical appointment patterns and presenting them as neutral outputs.
If AI-assisted selection claims to improve diversity or reduce repeat-player concentration, those improvements must be demonstrable. Otherwise, automation may simply entrench existing structures under a veneer of objectivity.
The concern is not technological capacity. It is accountability.
The Consent Problem
Responding from the perspective of counsel and party autonomy was Maria Lucia Echandia, Senior Associate at Hogan Lovells.
Her focus was foundational: consent is not incidental to arbitration — it is constitutive. Parties agree to arbitration, and to specific decision-makers, because they choose them.
AI, properly designed, may enhance informed consent by organizing and expanding access to relevant information. More comprehensive data can support better decision-making.
The tension arises when assistance shifts toward correction.
Parties frequently value attributes that resist quantification: familiarity with a legal culture, confidence in procedural style, deep industry knowledge developed over decades. These are not necessarily biases to be corrected. They are expressions of informed judgment.
If an algorithm is designed to override or “rebalance” such preferences in pursuit of structural reform, at what point does assistance become constraint? When does the tool begin selecting for institutional objectives rather than party intent?
That boundary is not merely technical. It goes to the core of arbitral legitimacy.
When Something Goes Wrong
Addressing the issue from the standpoint of enforceability and institutional accountability was Enrique Molina, Senior Associate at White & Case.
Human selection is imperfect. Repeat-appointment networks and demographic concentration are well-documented realities. But institutional reform — expanded rosters, transparent appointment statistics, rotational systems and disclosure requirements — can address those concerns without relocating authority to an algorithm.
International arbitration operates within a defined enforcement framework. Awards are subject to scrutiny under the New York Convention. Appointments can be challenged. The system depends on identifiable, accountable decision-makers.
When an algorithm substantially shapes an appointment later subject to challenge, the question becomes unavoidable: who answers for it?
Efficiency gains cannot compensate for a diffusion of responsibility.
What Data Cannot Measure
Adding the perspective of a sitting arbitrator was Sarah Reynolds, FCIArb, Partner & Head of International & Domestic Arbitration at Kaplan & Grady.
Metrics can capture efficiency, procedural timelines, conflict disclosures and publication history. They struggle with temperament, deliberative courage, the ability to command trust in a divided tribunal or to ask the question that reshapes a hearing.
These qualities are recognizable to experienced practitioners. But they are rarely legible to models trained primarily on published awards and appointment histories.
An algorithm optimized for measurable outputs may inadvertently disadvantage precisely the characteristics that define exceptional arbitrators.
The challenge is determining where quantification clarifies, and where it distorts.
The Tribunal’s Intervention: What Meaningful Supervision Requires
As the hearing progressed, the tribunal reframed the issue. The question was no longer whether AI should assist, but what meaningful human supervision of that assistance demands.
“Human-in-the-loop” can range from rigorous independent evaluation to little more than rubber-stamping algorithmic output. The danger lies in the latter — where cognitive deference to data-driven authority produces automation bias while preserving only nominal human responsibility.
Meaningful supervision requires structural safeguards:
- A defined and published standard of review for algorithmic outputs
- The authority to add candidates beyond an AI-generated list
- A documented explanation when following or departing from rankings
- Auditability sufficient to withstand scrutiny in the event of a challenge
Without these elements, supervision becomes ceremonial, offering the appearance of oversight without its substance.
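In system-design terms, these safeguards amount to an enforced audit record around each decision. The sketch below is a hypothetical illustration of that idea — the record fields, the named reviewer, and the requirement of a written explanation are assumptions drawn from the list above, not a specification any institution has adopted.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SelectionRecord:
    """Hypothetical audit record for one AI-assisted appointment decision."""
    reviewer: str                  # a named human decision-maker, not "the system"
    ai_shortlist: list             # candidates as ranked by the tool
    added_candidates: list = field(default_factory=list)  # human additions beyond the AI list
    chosen: str = ""
    reasons: str = ""              # documented explanation for following or departing
    timestamp: str = ""

    def finalize(self, chosen: str, reasons: str) -> dict:
        # Reason-giving is mandatory: an empty explanation blocks the decision.
        if not reasons.strip():
            raise ValueError("a documented explanation is required")
        self.chosen = chosen
        self.reasons = reasons
        self.timestamp = datetime.now(timezone.utc).isoformat()
        return asdict(self)

record = SelectionRecord(reviewer="Appointing authority", ai_shortlist=["A", "B"])
record.added_candidates.append("D")  # authority to go beyond the AI-generated list
entry = record.finalize(
    "D", "Departed from ranking: sector expertise outweighed the tool's signals."
)
audit_log = json.dumps(entry)  # retained for scrutiny if the appointment is challenged
```

The design choice worth noting is that the safeguards are structural rather than voluntary: the record cannot be finalized without a named reviewer and a written reason, so supervision leaves a trail by construction rather than by habit.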
The Mandate Question
The tribunal pressed the debate further. If AI-assisted selection were empirically shown to improve neutrality and diversity, should its use remain optional? Would discretion perpetuate the very biases reform seeks to address?
This is the strongest version of the structural reform argument.
The response remained cautious. Consent defines arbitration. Mandating algorithmic involvement without explicit agreement in arbitration clauses risks exceeding what parties bargained for. Moreover, empirical validation remains incomplete. Before compulsion can be justified, improvement must be demonstrated.
Gradual institutionalization — pilot programs, transparent reporting and structured experimentation — reflects how arbitration historically evolves: through practice, evidence and precedent.
Human Prevalence as Design Principle
The tribunal ultimately concluded that final appointment authority must remain human. But that conclusion carries design implications.
Human prevalence requires more than symbolic approval of algorithmic recommendations. It demands that final authority rest with a named decision-maker capable of independent judgment and genuine override. It requires reason-giving obligations and institutional audit mechanisms that can survive scrutiny under the New York Convention.
AI may surface overlooked candidates. It may flag conflicts more efficiently. It may disrupt patterns of comfortable repetition. These are meaningful contributions.
But authority cannot migrate informally to the algorithm through gradual deference. Legitimacy in arbitration rests on accountable human judgment.
What the Hearing Clarified
The hearing demonstrated that the debate over AI in arbitrator selection is not a contest between innovation and tradition. It is a question of institutional design.
AI-assisted selection will likely become part of the arbitral landscape. The system will be hybrid. But the distribution of authority between algorithm and human judgment remains under construction.
In arbitration, legitimacy precedes optimization. Technology can strengthen process integrity when embedded within transparent and accountable structures. Without them, it risks eroding the trust that underpins enforceability and party confidence.
The critical distinction is not between using AI and rejecting it. It is between assistance and delegation — and the institutional choices that determine where that line is drawn.
Giuseppe De Palo serves as a JAMS mediator, arbitrator and neutral evaluator, handling bankruptcy, business/commercial, employment, financial markets, intellectual property, international and cross-border, personal injury/torts, professional liability and telecommunications cases.
Annie Lespérance is Head of Americas at Jus Mundi, a legal tech company at the forefront of Artificial Intelligence and international law and arbitration, with responsibility for leading the company’s business strategy in the Americas. She also acts as arbitrator and mediator and is a fellow of the Chartered Institute of Arbitrators (FCIArb). She is a member of the Silicon Valley Arbitration and Mediation Center’s AI Task Force.
Disclaimer: The content is intended for general informational purposes only and should not be construed as legal advice. If you require legal or professional advice, please contact an attorney.