[PODCAST] Staying Ahead of the Curve With AI-Related Disputes: Tailored Approaches in a Rapidly Evolving Legal Environment
In this podcast, JAMS neutrals Hon. Jackson Lucky (Ret.); Dr. Ryan Abbott, M.D., Esq., FCIArb.; and Daniel B. Garrie, Esq., discuss how legal disputes are evolving in response to advancing artificial intelligence technology. Throughout their conversation, the neutrals explore the complexities of AI-related disputes, emphasizing the advantages of ADR over traditional litigation: its speed, its cost efficiency and the ability to vet ADR professionals to find the individual whose background, experience and technical knowledge best align with a case's needs. The group also discusses what prompted the creation of the JAMS AI rules and how the complexity and novelty of AI-related disputes necessitated a set of uniquely tailored rules that address the operational and technical realities of AI. From there, the neutrals elaborate on the need for the specialized process for technical expert reviews outlined in the JAMS AI rules and discuss the value of proactively including an AI arbitration clause in a contract.
[00:00:02] Moderator: Welcome to this podcast from JAMS. In this episode, we're going to be talking about AI. This fall, JAMS announced the launch of JAMS Next, an initiative aimed at leveraging AI-powered tools to streamline the dispute resolution process. Earlier, the organization also announced new rules governing disputes involving AI. With us are two JAMS neutrals who helped write those rules, Daniel Garrie and Ryan Abbott. We also have Judge Jackson Lucky, a passionate techie who has spent 13 years on the bench in Riverside County, California.
Ryan, I'll start with you. Obviously, AI involves a really complex ecosystem, multiple stakeholders. How do those many layers impact the potential for disputes?
[00:00:56] Dr. Ryan Abbott: Yeah, great question. And thanks for asking it. You know, I think what we're seeing with modern AI systems, and the ones that are increasingly being used in corporate settings, is that these systems have gotten very complex, and a lot of stakeholders are involved. So, you may have one group of people building software, other groups of people selecting training data for it, and other people applying the training data to train models.
You have different types of integrations of trained models into different sorts of systems and multiple systems integrators. You have different people using AI in different ways, iterating with the results, validating the results and licensing platforms. And as all of this gets done, it gets more and more challenging, sometimes as a factual matter, to figure out, when something bad happens: What happened? What went wrong? How do we fix it? Where in this chain are we having issues? Who's responsible for it? How are we going to resolve this as quickly and cheaply and fairly as we can? And so, as systems get more complex, you get more complex disputes, and the sorts of disputes that pose real problems for traditional litigation, which has very high discovery burdens and costs.
[00:02:18] Moderator: Judge Lucky, you know, Ryan mentioned sort of speed, cost—those have traditionally been the advantages of ADR over sort of traditional litigation. What is it about ADR that's a better fit for AI-related disputes compared to traditional litigation? Is it those same factors?
[00:02:39] Judge Jackson Lucky: Thanks, Andrew. Absolutely. It's the same factors, but I would say there are three additional factors we should consider when it comes to how alternative dispute resolution may be a better fit for AI disputes. The first is subject matter expertise. Having been on the bench myself, I can tell you that judges in most courts get cases assigned on a wheel or on a random basis, right? Cases are either assigned numerically or to whoever is next up. And so, you have this roulette wheel of bench officers who may or may not be familiar with the subject matter.
When you choose an ADR professional, generally speaking, you're getting a list of people. You can vet those people. You can stipulate to somebody who has the exact experience you want. If you want somebody like Daniel, who has graduate degrees in computer science, works in cybersecurity and is well versed in every type of technology when it comes to computer-related and IP-related disputes, you can get that person. If you want Ryan Abbott, who has been on the cutting edge of IP litigation when it comes to generative AI, whether in the patent realm or in teaching and lecturing on other types of intellectual property, like copyright, with AI, you have that kind of expertise. If you want a judicial officer who is involved in the tech world, then we have a judicial officer at JAMS, or possibly at other providers, who has that type of background.
So, you have that subject matter expertise, this ability to choose the decision-maker, and that is a huge advantage. The second thing that I think is a huge advantage, and I believe Ryan may have touched on this, is confidentiality. When you're dealing with trade secrets, sure, you can have protective orders, but courts have constraints that arbitrations do not; the court has to take extra steps and has a lot of limits on what it can keep confidential. Constitutionally, courts have to do things in the light. ADR has the advantage, since we're creatures of contract when it comes to arbitration, of doing things that are a little more opaque, that are better suited to shielding the secrets you might want to keep when it comes to your IP, your generative models and the data sets you use for your AI. And the third thing is that we have flexibility. I think Ryan and Daniel and the rest of the people involved in making the JAMS AI rules have done an incredible job. But if for some reason you find that you need a modification of those rules, you need to tweak them a little bit or you want to change a definition, well, you have the flexibility to do that in an ADR context. You don't have that flexibility when it comes to the Federal Rules of Civil Procedure. And so, being able to fashion something that has the neutral with the background you want, that has enhanced privacy and confidentiality, and that has the flexibility to move the pieces you need on the board to get to the playing field you want to create, these, I think, are great advantages of an ADR model and especially of our AI rules.
[00:06:33] Moderator: Daniel, Judge Lucky mentioned your background, making you well suited to take on these disputes. Can you talk a little bit about the challenges of resolving disputes that involve these large data sets used to train AI models and how ADR might mitigate those?
[00:06:50] Daniel Garrie: Yeah, certainly. So, I'm honored to be in such esteemed company as my colleagues, Judge Lucky and Ryan, or Dr. Abbott. It's a function of the reality that when a lot of AI disputes come into play, most people aren't going to want to turn over to a third party all of their trade secrets and most sensitive data and information. And even if they were willing to do it, there are vast amounts of data and a complex set of systems that need to be stood up to interpret and understand the data.
And so, by using an ADR system, you can fashion things that are outside the norm of how discovery usually transpires in litigation, and you can come up with unique and novel ways to work around some of the issues that present themselves uniquely in AI, like the weighting and scoring in LLM models, and give visibility in a way that's certainly a lot more efficient and timely than you would see in litigation. And to Judge Lucky's point, it's also possible to take the rules that we've proposed, or that JAMS has in the marketplace, and modify them so that they work appropriately for your AI platform, and to make sure that the right people know what an LLM is, know what weighting is, know what scoring is, understand the implications and help you ensure that your intellectual property won't be out in the wild, so to speak, and that it won't result in some ungodly, exorbitantly expensive discovery fight just so you can figure out whether there's merit to the case or not.
You know, the only thing I could say is Judge Lucky has been, you know, overly gracious, as he always is. I didn't even know there was another ADR provider other than JAMS.
[00:08:50] Moderator: Ryan, can you talk us through what prompted the creation of the JAMS Artificial Intelligence Disputes Clause, Rules and Protective Order, and how these rules are specifically tailored for AI disputes?
[00:09:05] Dr. Ryan Abbott: Sure. Well, the three of us have been out in the world dealing with AI-related disputes since before there were AI-related disputes, since they were just complex disputes involving various hardware and software systems, and we were seeing what was going on in the marketplace and reacting to some unmet needs. Generally, I think, as Judge Lucky talked about, a lot of what we brought to bear with these rules were traditional tensions or differences between ADR and litigation, like whether you want a generalist or a subject matter expert, which is a long-standing challenge. There are other areas where this comes up a lot, patent disputes for example. Some judges have a lot more experience in patents than others, or with very complex technologies, like nuclear technologies. There are wonderful generalists, and you may get just the right outcome with a generalist, but in a best-case scenario, you're probably spending a lot of time getting that person up to speed on a technology they're not familiar with. And in a bad-case scenario, you may get someone who fundamentally doesn't understand the underlying technology, and if that is central to the resolution of a factual issue, you may get a bad outcome.
So, this was a concern that we also saw in the AI community, given how complex these systems and the issues arising from them were, and we put our heads together and realized that some parties would be interested in a set of rules where the default is that you have someone whom JAMS has pre-vetted as a subject matter expert. It is great, too, when you have civil opposing counsel you're working with and can do your own rules, but if the parties aren't agreeing [on] anything because they're in a contentious space, it is good to have these defaults. Similarly with confidentiality: AI platforms are really zealously guarding their commercial trade secrets, how their models are trained, the weights, the training sets that are used. Sometimes that might be directly relevant to the resolution of an issue, but often it is tangential, and yet there are incentives in litigation these days to have scorched-earth discovery campaigns, the risk that third parties are getting information they really shouldn't, the risk that things are coming out in court that aren't relevant to a dispute, and, as Judge Lucky mentioned, there are constitutional rights to have some of this information in open court, even though it isn't really that relevant to the underlying merits of a particular case.
And so, platform developers in particular wanted to have a system where they could better safeguard things they have legitimate interests in keeping confidential. The rules have built-in protections for that, and specialized expert processes to look into those issues as well. So it was about hearing from people in the marketplace how ADR could do a better job with this sort of dispute, and then putting together rules that we felt accommodated that.
[00:12:11] Daniel Garrie: The only thing I wanted to add is that there's a disconnect between all the hype you read about AI and the reality of how AI is delivered in the marketplace: the systems, the complexity, the capital, the time, the investments and the like. And I think the goal of our rules is to help address the technical and operational realities of what we're seeing in today's business environment as AI drives further into the marketplace.
[00:12:47] Moderator: Judge Lucky, I don't know if you wanted to expand on your point about the appointment of panelists with specific technical knowledge and how they can improve the ADR process in AI-related disputes. You've obviously seen, potentially, the cost when parties don't have that expertise and roll the dice in court. I wondered if you could expand on that?
[00:13:11] Judge Jackson Lucky: Absolutely. And I think we've touched on this a little bit, all three of us: If you want somebody with knowledge of a particular domain because that's really the critical issue in your dispute, you can do that. If you want somebody with a particular kind of litigation or arbitration experience, maybe not in the domain but in the legal issues involved, you can do that. So, that's the obvious answer, but I think there are some other considerations when it comes to getting people with special technical knowledge. For instance, I've been reading some of the commentary about the JAMS AI rules, and one of the things I've seen is an examination of the word “cognition,” because that's the [word] we use in our rules to distinguish AI from other technologies. There's been some commentary that [the definition of] “cognition” may be too broad and may sweep in more than the rules intended.
So, here's a situation where having somebody with specialized knowledge matters. We have what some people feel is an ambiguity in the rule itself, which I think stems from the ambiguity of the term itself, right? It's hard to define a nascent technology where people are coming at it from different angles and trying to name things. Those of us who are computer nerds know that naming things is one of the toughest problems in computer science, and it's one of the things we struggle with in the law. So, having a domain expert or subject matter expert helps with this type of ambiguity: How are we going to scope discovery? What falls within AI, or what arguably meets the definition of “cognition” but isn't what we really think of as AI in a focused sense? I think having people with specialized knowledge who have thought about these issues in advance, who have maybe the technical know-how, maybe the legal know-how, maybe a combination of both, can add to that efficiency. Because the definitions we use and the interpretations we give to these rules, you're not going to find those in case law. You're not going to get a published decision that interprets a JAMS rule's language and what its scope is, at least not in most cases. And so, it helps to have somebody you can trust to look at what the definition of AI is without being overbroad, at what “cognition” means in this domain, and at the difference between what is just a computer and what is actually taking over a task that human beings would normally have to think about and do. These are some of the other areas where that specialized knowledge is going to come into play: the nuance of understanding the domain-specific language that technologists are using.
[00:16:48] Moderator: Daniel, can you talk to us about the built-in confidentiality protections in the AI rules? How did you craft those, what was the thought process, and how do you think they will benefit companies involved in these AI disputes?
[00:17:04] Daniel Garrie: Well, at a practical level, you have to weigh the complexity of the systems and the need to protect the intellectual property against the need to ensure an efficient and effective outcome in a legal proceeding. But in the particular reality of AI-driven disputes, and to Judge Lucky's prior points, this is an emerging area of law, right? There isn't a lot of case law, there aren't a lot of opinions, and there certainly isn't going to be a lot of sharing of intellectual property, data points and other things. And so, what Dr. Abbott and I realized right away was that there's a need to ensure and promote efficiency, and to do that and make the parties comfortable, the intellectual property has to be properly protected from the get-go.
[00:17:59] Moderator: Ryan, talk to us about the specialized process for technical expert reviews, which I know is important, as outlined in the AI rules, and how does that help streamline dispute resolution?
[00:18:12] Dr. Ryan Abbott: That's something I'm particularly happy with. You often have a battle of experts in dispute resolution, and this is an optional pathway for the parties to have the arbitrator select a technical expert and pose questions [to] the expert directly. And so, what that can help ensure is that you are getting a third-party, vetted expert who knows what they're doing and who is going to be limited to issues that the arbitrator, him- or herself, or the panel thinks will be relevant to the dispute.
So, you are avoiding very costly, duplicative, unnecessary fishing expeditions and making sure that you're sticking to what's relevant in the case. You're also reducing the risk that comes with sharing a company's secret sauce: it going to a third-party expert who might have links to a competitor or who might be less than careful with information. And so, this takes one of the bottlenecks of traditional litigation and makes it far less confrontational, and it really helps narrow the issues and the technical challenges associated with the case. That, in the end, makes everything move faster, cost less money and hopefully get to a fair, more neutral resolution.
Daniel, anything to add to that?
[00:19:33] Daniel Garrie: I was a big proponent of this. I think it allows a higher degree of comfort if the parties are practitioners in the field and the area. Both sides will likely produce their own experts, so I think having this sort of alternative mechanism will increase the efficiency and effectiveness of the arbitration proceedings.
[00:20:06] Moderator: Judge Lucky, [during] your time on the bench, you no doubt saw a lot of contract disputes and probably some bad, bad drafting of contracts. How important do you think it is for companies to get ahead of this and proactively include dispute resolution clauses in their AI agreements to prepare for future conflicts?
[00:20:30] Judge Jackson Lucky: Great question, and I think it's paramount. Daniel and Ryan and I have talked about these kinds of topics, and we've noted the irony that by the time the United States Supreme Court decided Google v. Oracle, or Oracle v. Google, that dispute was basically moot, because Google had already moved on from the APIs at issue in that case to Kotlin, a different platform. The law takes a long time to catch up with technology, and I would say that technology accelerates much faster than the law can, right? I think people overuse the word “exponential,” but when it comes to the advancement of certain technologies, we are seeing exponential growth. And I think it's unlikely that legislatures or regulatory agencies are going to be as agile as technologists are going to be.
So, how do we fix that problem? Well, we can write our own rules. Put the AI arbitration clause into the contract before you get into a problem, and have the foresight to modify those rules if necessary. We already do that with discovery; we do that with when an award is due. You can do that with the JAMS AI rules as well. Write in the bespoke measures you need, so that when you have a dispute in the future, you have done your best to anticipate the needs of that dispute and to resolve it fairly, as opposed to taking your chances that the law might or might not catch up with what you're going to need. And I'd love to hear what Daniel and Ryan think about this as well.
[00:22:34] Dr. Ryan Abbott: I think those are great points. To the extent that parties decide the AI rules would be useful, [parties] should write them into their agreements, although they also apply by default to AI-related substantive disputes at JAMS. But as Judge Lucky mentioned with Oracle v. Google, that took a decade, and by the time it was resolved, the technology had moved on. Many of the issues that are going to come out of AI-related disputes are new to courts. It is going to take years and years for them to make their way through the courts and for us to get binding new case law and guidance from appellate courts and the Supreme Court on how to resolve them. And very few parties to a dispute resolution are coming into it thinking, We really care about the case law that's being made; we're prepared to wait 10 years and spend millions and millions of dollars on appeal.
When people have a dispute, they want it resolved fairly quickly and cost-effectively. And if you don't want your cases tossed into this great, uncertain litigation future, having streamlined rules designed to deal with AI-related disputes is, I think, extremely important for commercial contracts. And [it's] fairly straightforward to put in a dispute resolution clause that will address many of the challenges that will happen when, inevitably, some of these deals fall apart or problems emerge from platforms.
[00:24:04] Daniel Garrie: I largely agree with my colleagues' points. Just at a practical level, if you think of where we were 12 months ago in AI and where we're at today, that in itself is demonstrative of the problems and issues, right? Do you really want to invest a whole lot of time? At a practical level, how are you going to preserve an AI system, given how high the cost to build these systems is and how quickly they turn over, if you can't quickly get resolution from a legal perspective, say, around a contract dispute on an AI system? By the time you're in court and the dispute is actually being heard and resolved, think how much has changed. I mean, I don't know, were we at GPT-3.5 18 months ago, Ryan, or Judge Lucky? I don't remember. But just think where we're at today, with 4.5 or 4.0 or Anthropic or any of these. And these systems aren't cheap. Are you going to just stand up and set aside the one system while you go to litigation? There's a practicality to it: arbitration is simply able to meet and resolve these disputes in a way that the courts, at this point, aren't in a position to.
[00:25:17] Moderator: Well, all right, gentlemen, we'll leave it there. This feels like a conversation that has only just begun, but Ryan, Daniel, Judge Lucky, thank you so much. We really appreciate your insights.
[00:25:30] Judge Jackson Lucky: Thank you so much for having us. We appreciate you.
[00:25:33] Dr. Ryan Abbott: Thank you, Andrew. That was great.
[00:25:35] Daniel Garrie: Thank you for the opportunity.
[00:25:38] Moderator: You've been listening to a podcast from JAMS, the world's largest private alternative dispute resolution provider. Our guests have been JAMS neutrals Daniel Garrie, Ryan Abbott and Judge Jackson Lucky. For more information about JAMS, please visit www.jamsadr.com. Thank you for listening to this podcast from JAMS.
Disclaimer:
This page is for general information purposes. JAMS makes no representations or warranties regarding its accuracy or completeness. Interested persons should conduct their own research regarding information on this website before deciding to use JAMS, including investigation and research of JAMS neutrals.