It has been quite a challenge for all of us in the legal and alternative dispute resolution (ADR) fields to sort through the hype and potential of AI. Many lawyers and law firms dove in headfirst, excited about the potential to improve their briefs and speed legal research, relying primarily on products like ChatGPT from OpenAI, Copilot from Microsoft (based on the same technology) and Bard from Google. As these platforms vacuumed up vast amounts of data from the internet, they became increasingly impressive—and increasingly risky. We have all heard about lawyers using chat-based tools to augment their writing and including “hallucinations” such as made-up case citations.
As those risks became more apparent and frequent, courts, agencies and legislatures at the federal and state levels in the U.S. and Europe began drafting and enacting policies and regulations aimed at maintaining privacy, ensuring proper disclosure of AI use, protecting copyrighted works and data, and guarding against the use of AI for nefarious purposes.
While these efforts continue, legal organizations are racing to implement AI applications, sometimes based on thoughtful use cases and other times based on a try-it-and-see approach. Both Lexis (Lexis+) and Westlaw (Westlaw Precision) have begun to roll out AI-assisted products to aid in legal research and document production.
In the ADR world, there are essentially three categories of AI use cases: administrative, procedural and practice-related. Administrative use cases abound behind the scenes, from the development of marketing materials (both words and images) to the automation of billing and financial management processes. Procedural developments will initially focus on rules for arbitrating AI disputes as well as policies and guidelines detailing the acceptable and non-acceptable uses of AI at ADR institutions. Most importantly, AI use cases related to improving client service and ADR outcomes have the potential to completely change the practice of ADR.
Beyond chatbots and other natural language AI use cases, products will emerge that take advantage of pattern recognition to help neutrals identify next steps (such as offers and counteroffers in mediation), case strengths and weaknesses, and even likely outcomes. Again, these types of applications rely on significant, relevant and accurate historical data to be useful. Care must also be taken to recognize potential bias in the underlying data models.
One early application of AI is the “smart court” in China, which uses AI instead of or in addition to human judges to adjudicate certain cases. AI systems such as Xiaofa, Xiao Zhi and System 206 use machine learning to offer advice to parties and can conduct hearings and make decisions based on case law. It remains to be seen if these models will be used to assist human judges or replace them, and ethical and fairness issues abound.
At JAMS, we have been taking a measured approach to identifying and developing AI use cases. AI is already being deployed behind the scenes in human resources and marketing, and other applications are in development. We feel it is important to be innovative with all deliberate haste. Each potential use case goes through a legal review, and risks are identified and addressed early in the process. The use of public large language models (LLMs) can be beneficial in some circumstances, but we can expect to see an emphasis on private models to reduce risk.
For example, AI can be useful in summarizing and analyzing documents, but it is dangerous to take too much of our human expertise out of the process. Legal research has a role, but avoiding erroneous information and misleading content is paramount. We believe that our efforts should be based primarily on improving the client experience and not deploying new technologies for technology’s sake.
JAMS is known for helping clients with the most complex cases, but our panelists also assist clients with lower-value disputes. In those cases, there is more potential for AI-assisted ODR (online dispute resolution), where clients can use online tools to resolve their disputes. Several very promising products have already emerged, although many others are not yet ready for prime time.
ADR practitioners and clients can expect a lot from institutions that take the right approach to rolling out AI. By avoiding the hype and focusing on the goal, we will see huge improvements in how ADR services are delivered in the future.
This page is for general information purposes. JAMS makes no representations or warranties regarding its accuracy or completeness. Interested persons should conduct their own research regarding information on this website before deciding to use JAMS, including investigation and research of JAMS neutrals.