AI explained: Navigating AI in Arbitration - The SVAMC Guideline Effect
Description
Arbitrators and counsel can use artificial intelligence to improve service quality and reduce their workload, but they must also contend with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They share insights and experiences on the current and future applications of AI in arbitration, the potential risks around bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration and how artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at T.H.E. Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor-state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks.
Benjamin: Thank you, Rebeca, for having me.
Rebeca: Well, let's dive into our questions today. So artificial intelligence is often misunderstood, or to put it another way, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI?
Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT really gave an AI tool to the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it the use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there are many forms of AI that will arise over the years. Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research and, to a certain extent, in case prediction. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption.
Rebeca: That's interesting. So you're saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think the reason people think that, and rely on it, as we'll see in some of the questions I have for you, is that people may rely on ChatGPT because it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes, you know, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used in ways people might not realize. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team.
Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself is worth priding ourselves on. And it may potentially even be more than an award itself. It really is a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and, to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard of it. Who hasn't? There are many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration and Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines. Up until now at the SVAMC, there have been a lot of think tank-like groups discussing many interesting subjects. But the SVAMC's scope, especially AI related, was to produce something tangible. So the guidelines to me were intuitive. I will be honest, I don't think I was the only one.
I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there.
Rebeca: Well, that's very interesting. And I just wanted to mention, or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge, and the attorneys involved were fined about $15,000 because of hallucinations in the case law that they cited to the court. So, you know, I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that.
Benjamin: I mean, I will say this. Learning is a relative term, because to learn, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, at any law firm, anyone working in law would never entrust a first-year associate, a summer associate, or a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity as the chair of the SVAMC AI Task Force, we also take a backseat, saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys for, they're more than welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen.
Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand, offering efficiency gains while raising concerns about bias and procedural fairness. What do you see as the biggest risk and benefits of AI in arbitration?