
JAMS ADR Insights


Understanding the Impact of AI on the ADR Process

Artificial intelligence (AI) is increasingly influencing how alternative dispute resolution (ADR) processes are designed and conducted. Its impact varies depending on a technology’s function, purpose, and the degree of human oversight involved. According to JAMS neutral Ryan Abbott, M.D., Esq., FCIArb, and co-author Brinson S. Elliott in this article for ALM, understanding this evolving landscape is essential for practitioners, parties, and the neutrals who facilitate resolution.

AI in ADR: A Broad and Growing Spectrum

AI technologies used in ADR (AIDR) range from assistive tools that support neutrals with administrative tasks to advanced systems that recommend, or even render, decisions in simple disputes. These tools can streamline high-volume case management, reduce costs, and enhance informed decision-making. In doing so, they reinforce ADR’s foundational goals: fairness, efficiency, and cost-effectiveness.

Some platforms also offer outcome forecasting to self-represented litigants or autonomously resolve low-stakes disputes, allowing human neutrals to focus on more nuanced and complex matters.

Risks and Limitations of Automation in Dispute Resolution

As AIDR tools become more sophisticated, so do the risks they present. While automation in tasks such as legal research, negotiation, and document review may boost productivity, complex disputes often involve emotional dynamics, ambiguous language, and credibility assessments—elements AI systems cannot reliably interpret or evaluate.

Furthermore, machine-learning models are only as effective as the data they are trained on. Legal systems that vary by jurisdiction and involve overlapping legal domains present significant challenges for AI development and deployment. So-called “black box” systems, where the reasoning behind outcomes is hidden, raise serious concerns about due process, transparency, and fairness.

The European Union’s Approach to Regulating AI in ADR

The European Union Artificial Intelligence Act, formally adopted in 2024, classifies AI tools used in judicial and ADR contexts as high risk. Such systems will be required to meet rigorous standards for data quality, documentation, human oversight, and transparency.

Complementing this regulatory approach, the European Commission for the Efficiency of Justice (CEPEJ) recently issued guidelines on online dispute resolution (ODR). These guidelines emphasize responsible deployment, adherence to technical safety standards, and ensuring full participation of the parties—principles that closely align with traditional ADR values.

Accountability, Transparency, and the Future of AIDR

Emerging regulations suggest that AI systems may soon be held to higher standards of transparency and accountability than human neutrals. Whereas human biases can be difficult to detect or prove, AI tools can be statistically tested, adjusted, and explained. This opens new possibilities for improving procedural fairness and reducing systemic bias in dispute resolution processes.

Under the EU AI Act, entities deploying high-risk AI systems must perform impact assessments, maintain transparency in decision-making, and ensure users have access to complaints and redress mechanisms. These responsibilities may ultimately extend to human neutrals who use flawed or biased technologies, underscoring the importance of careful system oversight and responsible usage.

Read the full article in The American Lawyer.


