
JAMS ADR Insights


Top Five Things Lawyers and Neutrals Should Know About Using Artificial Intelligence[1]

Artificial intelligence, or AI, was the buzzword of 2023, and it has affected almost every aspect of our world. Although the legal profession has used “traditional AI” for many years, the use of generative AI is new and still evolving.

“Traditional AI” focuses on performing a specific task intelligently. It refers to systems designed to respond to a particular set of inputs, which are capable of learning from data and making decisions or predictions based on that data.[2] “Examples of traditional AIs are voice assistants like Siri or Alexa, recommendation engines on Netflix or Amazon, or Google's search algorithm.”[3] “In contrast, ‘generative AI’ creates something new from information it receives. Generative AI can not only create text outputs, but can also create images, music, and computer code.”[4]

Currently, there are three areas where AI may be useful for lawyers and neutrals: (1) administrative (e.g., billing, marketing); (2) procedural (rules for AI use and AI disputes); and (3) practice (document summarization and translation).[5] AI can improve efficiency in law practice and in dispute resolution,[6] but lawyers and neutrals must understand its strengths and weaknesses.

Here are five concepts to keep in mind about AI when using it in law practice or dispute resolution:

  1. Capabilities and limitations: AI is a tool to aid, not replace, human judgment. It’s crucial to understand what AI can and cannot do, recognizing its potential to enhance efficiency in tasks like document review and legal research while knowing its limitations, such as the inability to grasp the nuances of legal reasoning.[7] Also, as discussed below, AI is subject to bias, and its capabilities depend on the data sets on which it is trained.
    AI is improving, but it still hallucinates.[8] Legal research is moving from non-specialized AI to AI trained on legal materials, designed to tackle specific, complex legal problems. Of course, a senior lawyer will have to review the output, just as they would review any junior lawyer’s work.[9]
  2. Ethical and legal compliance requirements: Lawyers’ AI use must comply with ethical standards[10] and legal regulations. AI use must also adhere to professional conduct rules and client confidentiality obligations. Lawyers should be familiar with regulations governing data privacy and protection.[11] Federal contractors are subject to Federal Acquisition Regulations.[12] Client data or personally identifiable information should not be used on any public AI platform, and client data should be scrubbed before being used on private AI platforms (see the illustrative sketch following this list).
  3. Bias and fairness: Biases in AI training data may lead to unfair or discriminatory outcomes. Users must consider potential bias in AI tools and work to mitigate bias to ensure fairness and impartiality in AI-assisted legal research or decisions.[13]
  4. Transparency and explainability: The decisions and processes of AI systems should be transparent and explainable, especially when they impact clients' rights or case outcomes.[14] Lawyers and neutrals must understand and be able to explain how AI tools arrive at certain conclusions or recommendations. Global AI standards often group transparency and explainability together because they are interconnected terms. Transparency answers the question of what happened in the AI system, while explainability addresses how a decision was made using AI.
  5. Continuous learning and adaptation: The field of AI is rapidly evolving, and lawyers must stay informed about the latest developments and best practices. This includes ongoing education about new AI tools and technologies, as well as adapting to changes in the legal landscape influenced by AI.[15]
    Lawyers and neutrals should take a thoughtful and informed approach to the use of AI in legal practice or dispute resolution, balancing technology’s benefits with their professional responsibilities.
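
To make the data-scrubbing point in item 2 more concrete, the short Python sketch below shows one way identifying details might be redacted before any text is submitted to an external AI service. It is purely illustrative: the patterns, function names and placeholders are hypothetical examples chosen for this sketch, they cover only a few identifier formats, and a real matter would call for purpose-built redaction tools plus human review.

```python
import re

# Hypothetical, illustrative patterns only; real redaction needs far broader
# coverage (names, addresses, matter numbers) and human review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before the text
    leaves the firm's environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Reach Jane Doe at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."))
# Reach Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED]; SSN [SSN REDACTED].
```

Note that the client’s name passes through untouched, which is exactly why simple pattern matching is only a starting point and not a substitute for the confidentiality judgment the rules of professional conduct require.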

[1] The author used Bing’s AI tool to assist in drafting this article.

[2] Bernard Marr, “The Difference Between Generative AI and Traditional AI: An Easy Explanation for Anyone,” Forbes, July 24, 2023.

[3] Id.

[4] Id.

[5] Christopher Poole, “The Use of AI in ADR: Balancing Potential and Pitfalls,” JAMS Blog, Jan. 31, 2024.

[6] Ryan Abbott and Brinson Elliott, “Putting the Artificial Intelligence in Alternative Dispute Resolution: How AI Rules Will Become ADR Rules,” Amicus Curiae, Series 2, Vol. 4, No. 3, 685-706, 2023.

[7] Brenda Leong and Patrick Hall, “5 things lawyers should know about artificial intelligence,” ABA Journal, Dec. 14, 2021.

[8] Jeff Neal, “The legal profession in 2024: AI,” Harvard Law Today, Feb. 14, 2024.

[9] Id.

[10] E.g., ethical rules regarding competence, confidentiality, communication and supervision.

[11] E.g., EU Artificial Intelligence Act; General Data Protection Regulation (GDPR); White House Executive Order, Oct. 30, 2023; state data privacy laws.

[12] E.g., https://www.acquisition.gov/far/52.204-21.

[13] Brian Uzzi, “A Simple Tactic That Could Help Reduce Bias in AI,” Harvard Bus. Rev., Nov. 4, 2020.

[14] Arsen Kourinian, “Addressing Transparency & Explainability When Using AI Under Global Standards,” Bloomberg Law, 2024.

[15] Jeff Neal, “The legal profession in 2024: AI,” Harvard Law Today, Feb. 14, 2024.


