Tech Law Talks

Author: Reed Smith

Description

Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.
94 Episodes
High tariffs would significantly impact data center projects through increased costs, supply chain disruptions and other problems. Reed Smith’s Matthew Houghton, John Simonis and James Doerfler explain how owners and developers can attenuate tariff risks throughout the planning, contract drafting, negotiation, procurement and construction phases. In this podcast, learn about risk allocation and other proactive measures to manage cost and schedule challenges in today’s uncertain regulatory environment. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Matt: Hey, everyone. Welcome to this episode of our Data Center series. My name is Matt Houghton, joined today by Jim Doerfler and John Simonis. And in today's episode, we will discuss our perspectives regarding key tariff-related issues impacting data center projects that owners and developers should consider during initial planning, contract drafting and negotiation, and procurement and construction. So a bit about who we have here today. I'm Matt Houghton, counsel at Reed Smith based out of our San Francisco office. I focus on projects and construction-related matters both on the litigation and transaction sides. I was very excited to receive the invitation to moderate this podcast from two of my colleagues and thought leaders at Reed Smith in the area of capital projects, Mr. John Simonis and Mr. Jim Doerfler. And with that, I'm pleased to introduce them. John, why don't you go ahead and give the audience a brief rundown of your background?  John: Hi, I'm John Simonis. I'm a partner in the Real Estate Group, practicing out of the Orange County, California office. I've been very active in the data center space for many years, going back to the early years of Digital Realty. Over the years, I've handled a variety of transactions in the data center space, including acquisitions and dispositions, joint ventures and private equity transactions, leasing, and of course, construction and development. While our podcast today is primarily focused on the impacts of tariffs and trade restrictions on data center construction projects, I should note that we are seeing a great deal of focus on tariffs and trade restrictions by private equity and M&A investors. Given the potential impacts on ROIs, it should not be surprising that investors, like owners and developers, are laser-focused on tariffs and tariff uncertainty, both through diligence and risk allocation provisions. This means that sponsors can expect sophisticated investors to carefully diligence and review data center construction contracts and often require changes if they believe the tariff-related provisions are suboptimal. Jim?  Jim: Yes, my name is Jim Doerfler. I'm a partner in our Pittsburgh office. I've been with Reed Smith now for over 25 years and have been focused on the projects and construction space. I would refer to myself as what I would call a bricks and sticks construction lawyer in that I focus on how projects are actually planned and built. 
I come to that by way of background in the sense that I grew up in a contractor family and I worked for a period of time as a project manager and a corporate officer for a commercial electrical contractor. And data center projects are the types of projects that we would have loved. They are projects that are complex. They have high energy demands. They have expensive equipment and lots of copper and fiber optics. In my practice at Reed Smith, I advise clients on commercial and industrial projects and do both claims and transactional work. And data center projects are sort of the biggest thing that we've seen come down the pipeline in some time. And so we're excited to talk to you about them here today.  Matt: Excellent. Thank you both. Really glad to be here with both of you. I always enjoy our conversations. I'm pretty sure this is the kind of thing we would be talking about, even if a mic wasn't on. So happy to be here. I want to start briefly with the choice of topic for today's podcast. Obviously, tariffs are at the forefront of construction-based considerations currently here in the U.S., but why are tariffs so important to data center project considerations?  Jim: So, this is Jim, and what I would say is that Reed Smith is a global law firm, and one of the things that we do in our projects and construction group is we try and survey the marketplace. And data center projects are such a significant part of the growth in the construction industry. In the U.S., for example, when we surveyed the available construction data from the available sources and subject matter experts, what we found is that at least for the past year or two, construction industry growth has been relatively flat aside from data center growth. And when you look at the growth of data centers, and the drive for them to be built coming from the growth in AI and other areas, it's really a growth industry for the construction and project space. And so something like tariffs that have the potential to impact those projects are particularly of concern to us. And so we want to make sure for our owner and developer clients and industry friends that we provided our perspectives on how to do these projects right.  Matt: That makes a lot of sense. So we've sort of set the stage for the discussion today. I think we could go on for hours if we didn't give ourselves some guidelines, but there are really three critical phases of a project where an owner or developer should be thinking about how they're going to address tariffs. And those are the initial planning, the contract drafting and negotiation, and then the procurement and construction phase. Since planning comes first, and of course the title of this podcast is tariff-related considerations when planning a data center project, let's start with the planning phase and some of the considerations an owner or developer may have at that time. John, what do you see as some of the key portions of the planning process where an owner or developer needs to start addressing tariff-related issues?  John: Tariffs and trade restrictions are getting a great deal of focus in all construction contracts. Tariffs impact steel and aluminum, rare earth materials. Data centers are big, expensive projects and can be impacted greatly. We're obviously in a period of great uncertainty as it relates to these types of restrictions. So I think in the planning stage, it may be somewhat obvious to say that that may be the most important time to mitigate to the extent possible some of the impacts. 
I think it starts in the RFP process. It starts with the requirements you're going to put on your design team and on your contractor to cooperate, collaborate and mitigate to the extent possible the impacts of tariffs, and particularly increased tariffs. You identify the materials and equipment subject to material tariffs and tariff risk increases, particularly those that might increase in the future, and address those as best possible. You expect your team to be proposing potential mitigation measures, such as early release, substitutes, and other value engineering exercises. So that should be a very proactive dialogue. And you should be getting the commitment from the parties early in the RFP process and throughout the planning and pricing stage to cooperate with the owner to mitigate negative impacts, both in terms of cost, timing, and other supply chain issues. Jim, there's also some things we're seeing in the procurement space, and maybe you can address that.  Jim: Sure. So, you know, as you're going through the RFP phase and sort of anticipating what you would ultimately want to build into your contract and how you're going to procure it, you want to be thinking ahead about procurement-related items. As John indicated, these projects are big and complicated and involve significant and expensive equipment. So you want to be thinking about essentially your pre-construction phase and your early release packages, your equipment or your major material items. And you want to be talking with your trade partners in terms of allowing that equipment to get there in a timely fashion and also trying to lock down pricing to mitigate against the risk of tariff-related or generally trade-related disruptions that could affect either price or delivery issues. So you want to be thinking about facilitating deposits for long lead or big ticket material or equipment items. And you want to identify what are those big equipment or material items that could make or break your project and identify the risk associated with those items early on and build that into your planning process.  John: And there's some difference between different contracting models. If you were looking at a fixed price contract versus a cost plus with a GMP or a cost plus contract, obviously the risk allocation as it relates to tariff and trade restrictions might be handled differently. But generally speaking, we're seeing tariff and trade restriction risk being addressed very specifically in contracts now. So sophisticated owners and contractors are very specifically focusing on provisions that specifically address these risks and how they might be mitigated and allocated.  Jim: Just to follow up on John's point, in theory you could have a fixed price contract versus, at least in the U.S., what we would describe as cost plus or cost reimbursable projects using a guaranteed maximum price or a not-to-exceed cap style agreement. In our experience, at least in the U.S., they tend to be more of the latter type of project delivery system, and even if you had
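A note for readers: the fixed-price versus cost-plus/GMP distinction John and Jim discuss can be pictured with simple arithmetic. The sketch below is a hypothetical illustration only; the equipment value, tariff rates and allocation rules are assumptions for the example, not terms from any actual data center contract.

# Hypothetical illustration: how a tariff increase flows through to owner cost
# exposure under two delivery models. All figures and allocation rules are
# assumptions, not terms from a real contract.

def landed_cost(base_price: float, tariff_rate: float) -> float:
    """Cost of imported equipment after an ad valorem tariff is applied."""
    return base_price * (1 + tariff_rate)

def owner_exposure(base_price: float, old_rate: float, new_rate: float,
                   delivery_model: str, tariff_rider: bool = False) -> float:
    """Rough owner cost increase when a tariff rises after contract signing.

    delivery_model: "fixed_price" or "cost_plus_gmp"
    tariff_rider:   True if the contract passes tariff changes through to the owner
    """
    increase = landed_cost(base_price, new_rate) - landed_cost(base_price, old_rate)
    if delivery_model == "fixed_price":
        # Contractor typically holds the risk unless a change-in-law or tariff
        # rider shifts it back to the owner.
        return increase if tariff_rider else 0.0
    if delivery_model == "cost_plus_gmp":
        # Costs flow through to the owner, subject to the GMP cap (ignored here).
        return increase
    raise ValueError(f"unknown delivery model: {delivery_model}")

# Example: $40M of imported switchgear, tariff rising from 10% to 25%.
print(owner_exposure(40_000_000, 0.10, 0.25, "fixed_price"))        # 0.0
print(owner_exposure(40_000_000, 0.10, 0.25, "fixed_price", True))  # 6,000,000.0
print(owner_exposure(40_000_000, 0.10, 0.25, "cost_plus_gmp"))      # 6,000,000.0

The point of the sketch is only that the same tariff increase lands on different parties depending on the delivery model and on whether a tariff or change-in-law rider has been negotiated, which is exactly the drafting question discussed above.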
Have you ever found yourself in a perplexing situation because of a lack of common understanding of key AI concepts? You're not alone. In this episode of "AI explained," we delve into Reed Smith's new Glossary of AI Terms with Reed Smith guests Richard Robbins, director of applied artificial intelligence, and Marcin Krieger, records and e-discovery lawyer. This glossary aims to demystify AI jargon, helping professionals build their intuition and ask informed questions. Whether you're a seasoned attorney or new to the field, this episode explains how a well-crafted glossary can serve as a quick reference to understand complex AI terms. The E-Discovery App is a free download available through the Apple App Store and Google Play. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Marcin: Welcome to Tech Law Talks and our series on AI. Today, we are introducing the Reed Smith AI Glossary. My name is Marcin Krieger, and I'm an attorney in the Reed Smith Pittsburgh office.  Richard: And I am Richard Robbins. I am Reed Smith's Director of Applied AI based in the Chicago office. My role is to help us as a firm make effective and responsible use of AI at scale internally.  Marcin: So what is the AI Glossary? The Glossary is really meant to break down big ideas and terms behind AI into really easy-to-understand definitions so that legal professionals and attorneys can have informed conversations and really conduct their work efficiently without getting buried in tech jargon. Now, Rich, why do you think an AI glossary is important?  Richard: So, I mean, there are lots of glossaries about, you know, sort of AI and things floating around. I think what's important about this one is it's written by and for lawyers. And I think that too many people are afraid to ask questions for fear that they may be exposed as not understanding things they think everyone else in the room understands. Too often, many are just afraid to ask. So we hope that the glossary can provide comfort to the lawyers who use it. And, you know, I think to give them a firm footing. I also think that it's, you know, really important that people do have a fundamental understanding of some key concepts, because if you don't, that will lead to flawed decisions, flawed policy, or choices can just miscommunicate with people in connection with you, with your work. So if we can have a firm grounding, establish some intuition, I think that we'll be in a better spot. Marcin, how would you see that?  Marcin: First of all, absolutely, I totally agree with you. I think that it goes even beyond that and really gets to the core of the model rules. When you look at the various ethics opinions that have come out in the last year about the use of AI, and you look at our ethical obligations and basic competence under Rule 1.1, we see that ethics opinions that were published by the ABA and by various state ethics boards say that there's a duty on lawyers to exercise the legal knowledge, skill, thoroughness, and preparation necessary for the representation. And when it comes to AI, you have to achieve that competence through some level of self-study. 
This isn't about becoming experts about AI, but to be able to competently represent a client in the use of generative AI, you have to have an understanding of the capabilities and the limitations, and a reasonable understanding about the tools and how the tech works. To put another way, you don't have to become an expert, but you have to at least be able to be in the room and have that conversation. So, for example, in my practice, in litigation and specifically in electronic discovery, we've been using artificial intelligence and advanced machine learning and various AI products previous to generative AI for well over a decade. And as we move towards generative AI, this technology works differently and it acts differently. And how the technology works is going to dictate how we do things like negotiate ESI protocols, how we issue protective orders, and also how we might craft protective orders and confidentiality agreements. So being able to identify how these types of orders restrict or permit the use of generative AI technology is really important. And you don't want to get yourself into a situation where you may inadvertently agree to allow the other side, the receiving party of your client's data, to do something that may not comply with the client's own expectations of confidentiality. Similarly, when you are receiving data from a producing party, you want to make sure that the way that you apply technology to that data complies with whatever restrictions may have been put in to any kind of protective order or confidentiality agreement.  Richard: Let me jump in and ask you something about that. So you've been down this path before, right? This is not the first time professionally you've seen new technology coming into play that people have to wrestle with. And as you were going through the prior use of machine learning and things that inform your work, how have you landed? You know, how often did you get into a confusing situation because people just didn't have a common understanding of key concepts where maybe a glossary like this would have helped or did you use things like that before?  Marcin: Absolutely. And it comes, it's cyclic. It comes in waves. Anytime there's been a major advancement in technology, there is that learning curve where attorneys have to not just learn the terminology, but also trust and understand how the technology works. Even now, technology that was new 10 years ago still continues to need to be described and defined even outside of the context of AI things like just removing email threads almost every ESI order that we work with requires us to explain and define what that process looks like when we talk about traditional technology assisted review to this day our agreements have to explain and describe to a certain level how technology-assisted review works. But 10 years ago, it required significant investment of time negotiating, explaining, educating, not just opposing counsel, but our clients.  Richard: I was going to ask about that, right? Because. It would seem to me that, you know, especially at the front end, as this technology evolves, it's really easy for us to talk past each other or to use words and not have a common understanding, right?  Marcin: Exactly, exactly. And now with generative AI, we have exponentially more terminology. There's so many layers to the way that this technology works that even a fairly skilled attorney like myself, when I first started learning about generative AI technology, I was completely overwhelmed. 
And most attorneys don't have the time or the technical understanding to go out into the internet and find that information. A glossary like this is probably one of the best ways that an attorney can introduce themselves to the terminology or have a reference where if they see a term that they are unfamiliar with, quickly go take a look at what does that term mean? What's the implication here? Get that two sentence description so that they can say, okay, I get what's going on here or put the brakes on and say, hey, I need to bring in one of my tech experts at this point.  Richard: Yeah, I think that's really important. And this kind of goes back to this notion that this glossary was prepared, you know, at least initially, right, for, you know, from the litigator's lens, litigator's perspective. But it's really useful well beyond that. And, you know, I mean, I think the biggest need is to take the mystery out of the jargon, to help people, you know, build their intuition, to ask good questions. And you touched on something where you said, well, I've got a, I don't need to be a technical expert on a given topic, but I need a tight. Accessible description that lets me get the essence of it. So, I mean, a couple of my, you know, favorite examples from the glossary are, you know, in the last year or so, we've heard a lot of people talking about RAG systems and they fling that phrase around, you know, retrieval augmented generation. And, you know, you could sit there and say to someone, yeah, use that label, but what is it? Well, we describe that in three tight sentences. Agentic AI, two sentences.  Marcin: And that's a real hot topic for 2025 is agentic AI.  Richard: Yep.  Marcin: And nobody knows what it is. So I focus a lot on litigation and in particular electronic discovery. So I have a very tight lens on how we use technology and where we use it. But in your role, you deal with attorneys in every practice group and also professionally outside of the law firm. You deal with professionals and technologists. In your experience, how do you see something like this AI glossary helping the people that you work with and what kind of experience levels you get exposed to?  Richard: Yeah, absolutely. So I keep coming back to this phrase, this notion of saying it's about helping people develop an intuition for when and how to use things appropriately, what to be concerned about. So a glossary can help to demystify things. These concepts so that you can then carry on whatever it is that you're doing. And so I know that's rather vague and abstract, but I mean, at the end of the day, if you can get something down to a couple of quick sentences and the key essence of it, and that light bulb comes on and people go, ah, now I kind of understand what we're talking about, that will help them guide their conversations about what they should be concerned about or not concerned about. And so, you know, that glossary
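For readers who want slightly more than a glossary entry on retrieval-augmented generation, here is a minimal sketch of the RAG pattern Richard mentions. It is not taken from the Reed Smith glossary; the documents are invented, the TF-IDF retrieval is a stand-in for embedding-based search, and the final generation step is only a placeholder for whichever model API a real system would call.

# Minimal, hypothetical sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant documents for a question, then hand them to a
# language model as context. The generation step is left as a placeholder.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # invented examples
    "A protective order limits how a receiving party may use produced data.",
    "Technology-assisted review uses a trained classifier to rank documents.",
    "An ESI protocol defines how electronically stored information is exchanged.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by lexical similarity to the query and return the top k."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved context before generation."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

question = "What does a protective order do?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # a real system would now send this prompt to a generative model

A production system would swap the TF-IDF retrieval for vector embeddings and actually call a model, but the two-step retrieve-then-generate shape is the essence the glossary compresses into a few sentences.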
Arbitrators and counsel can use artificial intelligence to improve service quality and lessen work burden, but they also must deal with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They reveal insights and experiences on the current and future applications of AI in arbitration, the potential risks of bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration. How artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at THE Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks.  Benjamin: Thank you, Rebeca, for having me.  Rebeca: Well, let's dive in into our questions today. So artificial intelligence is often misunderstood, or put it in other words, there is a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI?  Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, But it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT has really given an AI tool to the masses that question started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there's many forms of AI that will arise over the years. 
Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research, in case prediction to a certain extent. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption.  Rebeca: That's interesting. So you're saying, obviously, that AI and arbitration is more than just ChatGPT, right? I think that the reason why people think that and relies on maybe, as we'll see in some of the questions I have for you, that people may rely on ChatGPT because it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes we just, you know, people, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used and that people might not realize it. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team.  Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself for us is worth priding ourselves on. And it may potentially even be more than an award itself. It really, it's a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard it. Who hasn't? There's many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines. 
So up until now at the SVAMC, there are a lot of think tank-like groups discussing many interesting subjects. But the SVAMC scope, especially AI related, was to have something that produces something tangible. So the guidelines to me were intuitive. It was, I will be honest, I don't think I was the only one. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there.  Rebeca: Well, that's very interesting. And I just wanted to mention or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge and the attorneys involved were fined about $15,000 because of hallucinations on the case law that they cited to the court. So, you know I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that.  Benjamin: I mean, I will say this. Learning is a relative term because learning, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, at any law firm or anyone working in law would never entrust a first-year associate, a summer associate, a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So I am obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity of the chair of the SVAMC AI task force, we also take a backseat saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has something to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys, they're more welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen.  Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand, offering efficiency gains while raising concerns about bias and procedural fairness. What do you see as the biggest risk and benefits of AI in arbitration?  Benjamin: So it's an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines. Whoever hasn't looked at
Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI? ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, the EDPB guidance, and we'll share some of those initials soon on what they all mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy.  Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And looking forward to discussing with you interesting data protection topics.  Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast.  Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office.  Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act?  Andy: Sure, yeah. It came into force in August 2024. And it is a law about mainly the responsible use of AI. Generally, it is not really focused on data protection matters. Rather, it sits next to the world-famous European General Data Protection Regulation. It has a couple of passages where it refers to the GDPR and also sometimes where it states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept dividing up AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and how the commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some rules on transparency requirements when organizations use AI towards end customers. And depending on these risk categories, there are certain requirements, and, attaching to each of these categories, developers, importers, and also users, meaning organizations deploying AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. 
This is the basic concept and the rules start to kick in on February 2nd, 2025 when prohibited AI must not be used anymore in Europe. And the next bigger wave will be on August 2nd, 2025 when the rules on generative AI kick in. So organizations should start and be prepared to comply with these rules now and get familiar with this new type of law. It's kind of like a new area of law.  Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act?  Tyler: Sure, happy to. So Colorado AI Act, this is really the first comprehensive AI law in the United States. Passed at the end of the 2024 legislative session, it covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's just a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment related decisions, financial lending service decisions, if it's an essential government service, a healthcare service, housing, insurance, legal services. So that consequential decision piece is fairly broad. The effective date of it is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act. It's tied into the Colorado deceptive trade practices law.  Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance?  Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It actually has been released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB follows actually a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year. And I also wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately, meaning they do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set. And secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. And consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the EDPB statement, recent statement, does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement responds actually to a request from the Irish Data Protection Commission and gives kind of a framework, just particularly with respect to certain aspects. 
It actually responds to four specific questions. And the first question was, so under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, it can be considered anonymous, but only in some cases. So it must be impossible with all likely means to obtain personal data from the model either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis of the use and the training of AI models. And the EDPB answered those questions in one answer. So the statement indicates that the development and use of AI models can generally be based on the legal basis of legitimate interest. The statement then lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. And otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite interesting statement.  Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts?  Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. So instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase case-by-case appears in the statement. It appears actually 16 times, and can or could appears actually 161 times. So obviously, this is likely to lead to different approaches among data protection authorities, but it's maybe also just an intended strategy of the EDPB. Who knows?  Catherine: Well, as an American, I would read that as giving me a lot of flexibility.  Thomas: Yeah, true.  Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems?  Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. And I mean, with regard, like if you now look at the AI Act plus this EDPB paper or generally GDPR, there are a couple of items where organizations can prepare and need to prepare. First of all, organ
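As a rough aide-mémoire for the risk categories Andy describes above, the sketch below encodes them as a simple lookup table. It is deliberately compressed and partly assumed (in particular, the 2026 date for most high-risk obligations and the obligation lists are simplifications, not quotations from the Act); it is illustrative only and not legal guidance.

# Simplified, illustrative lookup of the EU AI Act categories mentioned in the
# episode. The obligation lists and the 2026 date are assumptions/simplifications,
# not an authoritative statement of the Act.

from datetime import date

AI_ACT_CATEGORIES = {
    "prohibited": {
        "rules_apply_from": date(2025, 2, 2),
        "obligations": ["must not be placed on the market or used in the EU"],
    },
    "general_purpose_ai": {
        "rules_apply_from": date(2025, 8, 2),
        "obligations": ["technical documentation", "transparency to downstream users"],
    },
    "high_risk": {
        "rules_apply_from": date(2026, 8, 2),  # assumed main application date
        "obligations": ["risk management", "documentation", "human oversight",
                        "accuracy, robustness and IT security"],
    },
    "other": {
        "rules_apply_from": None,
        "obligations": ["transparency when end customers interact with AI"],
    },
}

def obligations_for(category: str) -> list[str]:
    """Return the simplified obligation list for a given category."""
    if category not in AI_ACT_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return AI_ACT_CATEGORIES[category]["obligations"]

print(obligations_for("high_risk"))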
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova dive into the implications of the new Network and Information Security (NIS2) Directive, exploring its impact on cybersecurity compliance across the EU. They break down key changes, including expanded sector coverage, stricter reporting obligations and tougher penalties for noncompliance. Exploring how businesses can prepare for the evolving regulatory landscape, they share insights on risk management, incident response and best practices. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, and welcome to Tech Law Talks. My name is Catherine Castaldo, and I am a partner in the New York office in the Emerging Technologies Group, focusing on cybersecurity and privacy. And we have some big news with directives coming out of the EU for that very thing. So I'll turn it to Christian, who can introduce himself.  Christian: Thanks, Catherine. So my name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, also in the Emerging Technologies Group, focusing on IT and data. And we have a third attorney on this podcast, our colleague, Asélle.  Asélle: Thank you, Christian. Very pleased to join this podcast. I am counsel based in Reed Smith's London office, and I also am part of the Emerging Technologies Group and work on data protection, cybersecurity, and technology issues.  Catherine: Great. As we previewed a moment ago, on October 17th, 2024, there was a deadline for the transposition of a new directive, commonly referred to as NIS2. And for those of our listeners who might be less familiar, would you tell us what NIS2 stands for and who is subject to it?  Christian: Yeah, sure. So NIS2 stands for the Directive on Security of Network and Information Systems. And it is the second iteration of the EU's legal framework for enhancing the cybersecurity of critical infrastructures and digital services. It replaces the previous directive, which obviously is called NIS1, which was adopted in 2016, but had some limitations and gaps. So NIS2 applies to a wider range of entities that provide essential or important services to the society and the economy, such as energy, transport, health, banking, digital infrastructure, cloud computing, online marketplaces, and many, many more. It also covers public administrations and operators of electoral systems. Basically, anyone who relies on network and information systems to deliver their services and whose disruption or compromise could have significant impacts on the public interest, security or rights of EU citizens and businesses will be in scope of NIS2. As you already said, Catherine, NIS2 had to be transposed into national member state law. So it's a directive, not a regulation, contrary to DORA, which we discussed the last time in our podcast. It had to be implemented into national law by October 17th, 2024. But most of the member states did not. So the EU Commission has now started investigations regarding violations of the Treaty on the Functioning of the European Union against, I think, 23 member states as they have not yet implemented NIS2 into national law.  
Catherine: That's really comprehensive. Do you have any idea what the timeline is for the implementation?  Christian: It depends on the state. So there are some states that already have comprehensive drafts. And those just need to go through the legislative process. In Germany, for example, we had a draft, but we have elections in a few weeks. And the current government just stated that they will not implement the law before that. And so after the election, the implementation law will be probably discussed again, redrafted. And so it'll take some time. It might be in the third quarter of this year.  Catherine: Very interesting. We have a similar process. Sometimes it happens in the States where things get delayed. Well, what are some of the key components?  Asélle: So, NIS2 focuses on cybersecurity measures, and we need to differentiate it from the usual cybersecurity measures that any organization thinks about in the usual way where they protect their data, their systems against cyber attacks or incidents. So the purpose of this legislation is to make sure there is no disruption to the economy or to others. And in that sense, the similar kind of notions apply. Organizations need to focus on ensuring availability, authenticity, integrity, confidentiality of data and protect their data and systems against all hazards. These notions are familiar to us also from the GDPR kind of framework. So there are 10 cybersecurity risk management measures that NIS2 talks about, and these are policies on risk analysis and information system security, incident handling, business continuity and crisis management, supply chain security, security in systems acquisition, development and maintenance, policies to assess the effectiveness of measures, basic cyber hygiene practices and training, cryptography and encryption, human resources security training, and use of multi-factor authentication. So these are familiar notions also. And it seems the general requirements are something that organizations will be familiar with. However, the European Commission in its NIS Investments Report of November 2023 has done research, a survey, and actually found that organizations that are subject to NIS2 didn't really even take these basic measures. Only 22% of those surveyed had third-party risk management in place, and only 48% of organizations had top management involved in approving cybersecurity risk policies and any type of training. And this reduces the general commitment of organizations to cybersecurity. So there are clearly gaps, and NIS2 is trying to focus on improving that. There are a couple of other things that I wanted to mention that are different from NIS1 and are important. So as Christian said, essential entities have a different compliance regime applied to them compared with important entities. Essential entities need to systematically document their compliance and be prepared for regular monitoring by regulators, including regular inspections by competent authorities, whereas important entities are only obliged to kind of be in touch and communicate with competent authorities in case of security incidents. And there is an important clarification in terms of the supply chain; these are the questions we receive from our clients. And the question is, does the supply chain mean anyone that provides services or products? And from our reading of the legislation, supply chain only relates to ICT products and ICT services. 
Of course, there is a proportionality principle employed in this legislation, as with most European legislation, and there is a size threshold. The legislation only applies to those organizations who exceed the medium threshold. And two more topics, and I'm sorry that I'm kind of taking over the conversation here, but I thought the self-identification point was important because in the view of the European Commission, the original NIS1 didn't cover the organizations it intended to cover, and so in the European Commission's view, the requirements are so clear in terms of which entities it applies to, that organizations should be able to assess it and register, identify themselves with the relevant authorities by April this year. And the last point, digital infrastructure organizations, their nature is specifically kind of taken into consideration, their cross-border nature. And if they provide services in several member states, there is a mechanism for them to register with the competent authority where their main establishment is based, similar to the notion under the GDPR.  Catherine: It sounds like, though, there's enough information in the directive itself without waiting for the member state implementation that companies who are subject to this rule could be well on their way to being compliant by just following those principles.  Christian: That's correct. So even if the implementation into national law is currently not happening in all of the member states, companies can already work to comply with NIS2. So once the law is implemented, they don't have to start from zero. NIS2 sets out the requirements that important and essential entities under NIS2 have to comply with. For example, have a proper information security management system, have supply chain management, and train their employees. And so they can already work to implement NIS2, and the directive itself also has annexes that set out the sectors and potential entities that might be in scope of NIS2, and the member states cannot really vary from those annexes. So if you are already in scope of NIS2 under the information that is in the directive itself, you can be sure that you would probably also have to comply with your national rules. There might be some gray areas where it's not fully clear if someone is in scope of NIS2 and those entities might want to wait for the national implementation. And it also can happen that the national implementation goes beyond the directive and covers sectors or entities that might not be in scope under the directive itself. And then of course they will have to work to implement the requirements then. I think a good starting point anyways is the existing security program that companies already hopefully have in place. So if they, for example, have an ISO 27001 framework implemented, it might be good to start with a mapping exercise of what NIS2 might require in addition to the ISO 27001, and then look if this should be implemented
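Christian's suggestion of starting with a mapping exercise against an existing ISO 27001 program can be pictured as a simple gap analysis. The sketch below is hypothetical; the list of measures paraphrases the ten items Asélle mentions, and the set of existing controls is invented purely for illustration.

# Hypothetical gap-analysis sketch: compare the NIS2 risk-management measures
# mentioned in the episode against controls an organization already has in
# place (for example from an ISO 27001 program). The existing_controls set is
# an assumption for illustration, not a real crosswalk.

NIS2_MEASURES = [
    "risk analysis and information system security policies",
    "incident handling",
    "business continuity and crisis management",
    "supply chain security",
    "security in systems acquisition, development and maintenance",
    "policies to assess the effectiveness of measures",
    "basic cyber hygiene practices and training",
    "cryptography and encryption",
    "human resources security",
    "multi-factor authentication",
]

existing_controls = {
    "incident handling",
    "cryptography and encryption",
    "business continuity and crisis management",
}

def nis2_gaps(measures: list[str], controls: set[str]) -> list[str]:
    """Return the NIS2 measures not yet covered by existing controls."""
    return [m for m in measures if m not in controls]

for gap in nis2_gaps(NIS2_MEASURES, existing_controls):
    print("Gap:", gap)

In practice the mapping would be done control by control against the ISO 27001 Annex A set rather than by string matching, but the exercise Christian describes has exactly this shape: list what NIS2 expects, mark what already exists, and plan remediation for the rest.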
Tyler Thompson sits down with Abigail Walker to break down the Colorado AI Act, which was passed at the end of the 2024 legislative session to prevent algorithmic discrimination. The Colorado AI Act is the first comprehensive law in the United States that directly and exclusively targets AI and GenAI systems. ----more---- Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Tyler: Hi, everyone. Welcome back to the Tech Law Talks podcast. This is continuing Reed Smith's AI series, and we're really excited to have you here today and for you to be with us. The topic today, obviously, AI and the use of AI is surging ahead. I think we're all kind of waiting for that regulatory shoe to drop, right? We're waiting for when it's going to come out to give us some guardrails or some rules around AI. And I think everyone knows that this is going to happen whether businesses want it to or not. It's inevitable that we're going to get some more rules and regulations here. Today, we're going to talk about what I see as truly the first or one of the first ones of those. That's the Colorado AI Act. It's really the first comprehensive AI law in the United States. So there's been some kind of one-off things and things that are targeted to more privacy, but they might have implications for AI. The Colorado AI Act is really the first comprehensive law in the United States that directly targets AI and generative AI and is specific for those uses, right? The other reason why I think this is really important is because Abigail and I were talking, we see this as really similar to what happened with privacy for the folks that are familiar with that. And this is something where privacy a few years back, it was very known that this is something that needed some regulations that needed to be addressed in the United States. After an absence of any kind of federal rulemaking on that, California came out with their CCPA and did a state-specific rule, which has now led to an explosion of state-specific privacy laws. I personally think that that's what we could see with AI laws as well, is that, hey, Colorado is the first mover here, but a lot of other states will have specific AI laws in this model. There are some similarities, but some key differences to things like the EU AI Act and some of the AI frameworks. So if you're familiar with that, we're going to talk about some of the similarities and differences there as we go through it. And kind of the biggest takeaway, which you will be hearing throughout the podcast, which I wanted to leave you with right up at the start, is that you should be thinking about compliance for this right now. This is something that as you hear about the dates, you might know that we've got some runway, it's a little bit away. But really, it's incredibly complex and you need to think about it right now and please start thinking about it. So as for introductions, I'll start with myself. My name is Tyler Thompson. I'm a partner at the law firm of Reed Smith in the Emerging Technologies Practice. This is what my practice is about. It's AI, privacy, tech, data, basically any nerd type of law, that's me. 
And I'll pass it over to Abigail to introduce herself. Abigail: Thanks, Tyler. My name is Abigail Walker. I'm an associate at Reed Smith, and my practice focuses on all things related to data privacy compliance. But one of my key interests in data privacy is where it intersects with other areas of the law. So naturally, watching the Colorado AI Act go through the legislative process last year was a big pet project of mine. And now it's becoming a significant part of my practice and probably will be in the future. Tyler: So the Colorado AI Act was passed at the very end of the 2024 legislative session. And it's largely intended to prevent algorithmic discrimination. And if you're asking yourself, well, what does that mean? What is algorithmic discrimination? In some sense, that is the million-dollar question, but we're going to be talking about that in a little bit of detail as we go through this podcast. So stay tuned and we'll go into that in more detail. Abigail: So Tyler, this is a very comprehensive law and I doubt we'll be able to cover everything today, but I think maybe we should start with the basics. When is this law effective and who's enforcing it and how is it being enforced? Tyler: So the date that you need to remember is February 1st of 2026. So there is some runway here, but like I said at the start, even though we have a little bit of runway, there's a lot of complexity and I think it's something that you should start now. As far as enforcement, it's the Colorado AG. The Colorado Attorney General is going to be tasked with enforcement here. A bit of good news is that there's no private right of action. So the Colorado AG has to bring the enforcement action themselves. You are not under risk of being sued under the Colorado AI Act by an individual plaintiff. Maybe the bad news here is that violating the Colorado AI Act will be considered an unfair and deceptive trade practice under Colorado law. So the trade practice regulation, that's something that exists in Colorado law like it does in a variety of state laws. And a violation of the Colorado AI Act can be a violation of that as well. And so that just really brings the AI Act into some of these overarching rules and regulations around deceptive trade practices. And that really increases the potential liability, your potential for damages. And I think also just from a perception point, it puts the Colorado AI Act violation in some of these kind of consumer harm violations, which tend to just have a very bad perception, obviously, to your average state consumer. The law also gives the Attorney General a lot of power in terms of being able to ask covered entities for certain documentation. We're going to talk about that as we get into the podcast here. But the AG also has the option to issue regulations that further specify some of the requirements of this law. The thing that we're really looking forward to is additional regulations here. As we go through the podcast today, you're going to realize there seems like there's a lot of gray area. And you'd be right, there is a lot of gray area. And we're hoping some of the regulations will come out and try to reduce that amount of uncertainty as we move forward. Abigail, can you tell us who does the law apply to and who needs to have their ducks in a row for the AG by the time we hit next February? Abigail: Yeah. 
So unlike Colorado's privacy law, which has like a pretty large processing threshold that entities have to reach to be covered, this law applies to anyone doing business in Colorado that develops or deploys a high-risk AI system. Tyler: Well, that high-risk AI system sentence, it feels like you used a lot of words there that have a real legal significance. Abigail: Oh, yes. This law has a ton of definitions, and they do a lot of work. I'll start with a developer. A developer, you can think of just as the word implies. They are entities that are either building these systems or substantially modifying them. And then deployers are the other key players in this law. Deployers are entities that deploy these systems. So what does deploy actually mean? The law defines deploy as to use. So basically, it's pretty broad. Tyler: Yeah, that's quite broad. Not the most helpful definition I've heard. So if you're using a high-risk AI system and you do business in Colorado, basically you're a deployer. Abigail: Yes. And I will emphasize the fact that most of the requirements of the law only apply to high-risk AI systems. And I can get into what that means. High-risk, for the purpose of this law, refers to any AI system that makes or is a substantial factor in making a consequential decision. Tyler: What is a consequential decision? Abigail: They are decisions that produce legal or substantially similar effects. Tyler: Substantially similar. Abigail: Yeah. Basically, as I'm sure you're wondering, what does substantially similar mean? We're going to have to see how that plays out when enforcement starts. But I can get into what the law considers to be legal effects, and I think this might highlight or shed some light on what substantially similar means. The law kind of outlines scenarios that are considered consequential. These include education enrollment, educational opportunities, employment or employment opportunities, financial or lending services, essential government services, health care services, housing, insurance, and legal services. Tyler: So we've already gone through a lot. So I think this might be a good time to just pause and put this into perspective, maybe give an example. So let's say your recruiting department or your HR department uses, aka deploys, an AI tool to scan job applications or job application cover letters for certain keywords. And those applicants that don't use those keywords get put in the no pile, or, hey, this cover letter, it's not talking about what we want to talk about, so we're going to reject them. They're going to go on the no pile of resumes. What do you think about that, Abigail? Abigail: I see that as kind of falling into that employment opportunity category that the law identifies. And I feel like that's also kind of falling into that substantially similar thing when it comes to substantially similar to legal effects. I think that use would be covered in this situation. Tyler: Yeah, a lot of uncertainty here, but I think we're all guessing until enforcement
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova break down DORA, the Digital Operational Resilience Act, which is new legislation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. DORA sets out common standards and requirements for these entities so they can identify, prevent, mitigate and respond to cyber threats and incidents as well as ensure business continuity and operational resilience. The team discusses the implications of DORA and offers insights on applicability, obligations and potential liability for noncompliance. This episode was recorded on 17 January 2025. ----more---- Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, everyone. I'm Catherine Castaldo, a partner in the New York office of Reed Smith, and I'm in the EmTech Group. And I'm here today with my colleagues, Christian and Asélle, who will introduce themselves. And we're going to talk to you about DORA. Go ahead, Christian. Christian: Hi, I'm Christian Leuthner. I'm a Reed Smith partner in the Frankfurt office, focusing on IT and data protection law.  Asélle: And I'm Asélle Ibraimova. I am a counsel based in London. And I'm also part of the EmTech group, focusing on tech, data, and cybersecurity.  Catherine: Great. Thanks, Asélle and Christian. Today, when we're recording this, January 17th, 2025, is the effective date of this new regulation, commonly referred to as DORA. For those less familiar, would you tell us what DORA stands for and who is subject to it? Christian: Yeah, sure. So DORA stands for the Digital Operational Resilience Act, which is a new regulation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. It applies to a wide range of financial entities, such as banks, insurance companies, investment firms, payment service providers, crypto asset service providers, and even to critical third-party providers that offer services to the financial sector. DORA sets out common standards and requirements for these entities to identify, prevent, mitigate, and respond to cyber threats and incidents, as well as to ensure business continuity and operational resilience.  Catherine: Oh, that's comprehensive. Is there any entity who needs to be more concerned about it than others, or is it equally applicable to all of the ones you listed?  Asélle: I can jump in here. So DORA is a piece of legislation that wants to respect proportionality and allow organizations to deal with DORA requirements in a way that is proportionate to their size and to the nature of the cybersecurity risks. So, for example, micro-enterprises or certain financial entities that have only a small number of members will have a simplified ICT risk management framework under DORA. I also wanted to mention that DORA applies to financial entities that are outside of the EU but provide services in the EU, so they will be caught. And maybe just to also add, in terms of the risks, it's not only the size of the financial entities that matters in terms of how they comply with the requirements of DORA, but also the cybersecurity risk. 
So let's say an ICT third-party service provider: the risk of that entity will depend on the nature of the service, on its complexity, on whether that service supports a critical or important function of the financial entity, on the general dependence on the ICT service provider, and ultimately on its potential to disrupt the services of that financial entity.  Catherine: So some of our friends might just be learning about this by listening to the podcast. So what does ICT stand for, Asélle?  Asélle: It stands for information and communication technology. So in other words, it's anything that a financial entity receives as a service or a product digitally. It also covers ICT services provided by a financial entity. So, for example, if a financial entity offers a platform for fund or investment management, or a piece of software, or its custodian services are provided digitally, those services will also be considered an ICT service. And those financial entities will need to cover their customer-facing contracts as well and make sure DORA requirements are covered in the contracts.  Catherine: Thank you for that. What are some of the risks for noncompliance? Christian: The risks for noncompliance with DORA are significant and could entail both financial and reputational consequences. First of all, DORA empowers the authorities to impose administrative sanctions and corrective measures on entities that breach its provisions, which could range from warnings and reprimands, to fines and penalties, to withdrawals of authorizations and licenses, which could have a significant impact on the business of those entities. The level of sanctions and measures will depend on the nature, gravity and duration of the breach, as well as on the entity's cooperation and remediation efforts. So you had better be positive and help the authority in case they identify a breach. Second, non-compliance with DORA could also expose entities to legal actions and claims from customers, investors, or other parties that might suffer losses or damages as a result of a cyber incident or disruption of service. And third, non-compliance with DORA could also damage the entity's reputation and trustworthiness in the market and affect its competitive advantage and customer loyalty. Therefore, entities should take DORA seriously and ensure that they comply with its requirements and expectations.  Catherine: If I haven't been able to start considering DORA, and I think it might be applicable to me, where should I start?  Asélle: It's actually a very interesting question. So from our experience, we see large financial entities such as banks look at this comprehensively. Obviously, all financial entities had quite a long time to prepare, but large organizations seem to look at it more comprehensively and have done the proper assessment of whether or not their services are caught. But we are still getting quite a few questions in terms of whether or not DORA applies to a certain financial entity type. So I think there are quite a few organizations out there who are still trying to determine that. But once that's clear, although DORA itself is quite a long piece of legislation, in actual fact it is further clarified in various regulatory technical standards and implementing technical standards, and they clarify all of the cybersecurity requirements that actually appear quite generic in DORA itself. So those RTS and ITS are quite lengthy documents and are altogether around 1,000 pages. 
So that's where the devil is in the detail, and organizations will find it may appear quite overwhelming. So I would start by assessing whether DORA applies, which services, which entities, which geographies. Once that's determined, it's important to identify whether financial entities' own services may be deemed ICT services, as I just explained earlier. The next step in my mind would be to check whether the services that are caught also support critical or important functions, and also, when making registries of third-party ICT service providers, making sure to identify those separately. And the reason is that quite a few additional requirements apply to critical and important functions. For example, the incident reporting obligations and requirements in terms of contractual agreements. And then I would look at updating contracts, first of all, with important ICT service providers, then also checking if customer-facing contracts need to be updated if the financial entity is providing ICT services itself. And also not forgetting the intra-group ICT agreements where, for example, a parent company is providing data storage or word processing services to its affiliates in Europe. So they should be covered as well.  Catherine: If we were a smaller company or a company that interacts in the financial services sector, can we think of an example that might be helpful for people listening on how I could start? Maybe what's an example of a smaller or middle-sized company that would be subject to this? And then who would they be interacting with on the ICT side?  Asélle: Maybe an example of that could be an investment fund or a pensions provider. I think most of this compliance effort when it comes to DORA will be driven by in-house cybersecurity teams. So they will be updating their risk management and risk frameworks. But any updates to policies, whenever they have to be looked at, I think will need to be reviewed by legal. And incident reporting policies and contract management policies, I don't think they depend on size. If there are ICT service providers supporting critical or important functions, additional requirements will apply regardless of whether you're a small or a large organization. It's just that the measures will depend on what level of risk, say, a certain ICT service provider presents. So if this internal cybersecurity team has kind of put, you know, all the ICT assets in buckets and all the third-party ICT services in various buckets based on criticality, then that would make the job of legal and generally compliance much easier. However, what we're seeing right now is that all of that work is happening all at the same time in parallel as people are rushing to get compliant. So that will mean that there may be gaps and inconsistencies, and I'm sure they can be patched later.  Catherine: Thank you for that. So just another follow-up question, maybe Christian can respond, would
In its first leading judgment (decision of November 18, 2024, docket no.: VI ZR 10/24), the German Federal Court of Justice (BGH) dealt with claims for non-material damages pursuant to Art. 82 GDPR following a scraping incident. According to the BGH, a proven loss of control or well-founded fear of misuse of the scraped data by third parties is sufficient to establish non-material damage. The BGH therefore bases its interpretation of the concept of damages on the case law of the CJEU, but does not provide a clear definition and leaves many questions unanswered. Our German data litigation lawyers, Andy Splittgerber, Hannah von Wickede and Johannes Berchtold, discuss this judgment and offer insights for organizations and platforms on what to expect in the future. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Andy: Hello, everyone, and welcome to today's episode of our Reed Smith Tech Law Talks podcast. In today's episode, we'll discuss the recent decision of the German Federal Court of Justice, the FCJ, of November 18, 2024, on compensation payments following a data breach or data scraping. My name is Andy Splittgerber. I'm a partner at Reed Smith's Munich office in the Emerging Technologies Department. And I'm here today with Hannah von Wickede from our Frankfurt office. Hannah is also a specialist in data protection and data litigation. And Johannes Berchtold, also from Reed Smith in the Munich office, also from the emerging technologies team and a tech litigator. Thanks for taking the time and diving a bit into this breathtaking case law. Just to catch everyone up and bring everyone up to the same speed, it was a case decided by the German highest civil court, in an action brought by a user of a social platform who wanted damages after his personal data was scraped by a hacker from that social media network. And that was done by using telephone numbers, or trying out any kind of numbers, probably through a technical fault, and this find-a-friend function. And in this way, the hackers could download a couple of million data sets from users of that platform, which then could be found on the dark web. And the user then started an action before the civil court claiming damages. And this case was then referred to the highest court in Germany because of the legal difficulties. Hannah, do you want to briefly summarize the main legal findings and outcomes of this decision?  Hannah: Yes, Andy. So, the FCJ made three important statements, basically. First of all, the FCJ provided its own definition of what a non-material damage under Article 82 GDPR is. They are saying that mere loss of control can constitute a non-material damage under Article 82 GDPR. And if such a loss of control on the part of the plaintiff is not verifiable, a justified fear of personal data being misused can also constitute a non-material damage under the GDPR. So both are pretty much in line with what the ECJ has already said about non-material damages in the past. And besides that, the FCJ also makes a statement regarding the amount of compensation for non-material damages following from a scraping incident. 
And this is quite interesting because according to the FCJ, the amount of the claim for damages in such cases is around 100 euros. That is not much money. However, the FCJ also says both the loss of control and a reasonable apprehension, including the negative consequences, must first be proven by the plaintiff.  Andy: So we have an immaterial damage, that's important for everyone to know. And the legal basis for the damage claim is Article 82 of the General Data Protection Regulation. So it's not German law, it's European law. And as you mentioned, Hannah, there was some ECJ case law in the past on similar cases. Johannes, can you give us a brief summary of what these rulings were about? And in your view, does the FCJ bring new aspects to these cases? Or is it very much in line with what the European Court of Justice has already said?  Johannes: Yes, the FCJ has quoted the ECJ quite broadly here. So there was a little clarification in this regard. So far, it's been unclear whether the loss of control itself constitutes the damage or whether the loss of control is a mere negative consequence that may constitute non-material damage. So now the Federal Court of Justice ruled that the mere loss of control constitutes the direct damage. So there's no need for any particular fear or anxiety to be present for a claim to exist.  Andy: Okay, so it's not revolutionary. We read a bit in the press after the decision: yes, it's a very new and interesting judgment, but not revolutionary. It stays very close to what the European Court of Justice said already. The loss of control, I still struggle with. I mean, even if it's an immaterial damage, it's a bit difficult to grasp. And I would have hoped the FCJ would provide some more clarity or guidance on what they mean, because this is the central aspect, the loss of control. Johannes, you have some more details? What does the court say, or how can we interpret that?  Johannes: Yeah, Andy, I totally agree. So in the future, discussion will most likely tend to focus on what actually constitutes a loss of control. So the FCJ does not provide any guidance here. However, it can already be said the plaintiff must have had control over his data to actually lose it. So whether this is the case is particularly questionable if the actual scraped data was public, like in a lot of the cases we have in Germany right now, or if the data was already included in other leaks, or the plaintiff published the data on another platform, maybe on his website or another social network where the data was freely accessible. So in the end, it will probably depend on the individual case whether there was actually a loss of control or not. And we'll just have to wait on more judgments in Germany or in Europe to define loss of control in more detail.  Andy: Yeah, I think that's also a very important aspect of this case that was decided here, that the major cornerstones of the claim were established, they were proven. So it was undisputed that the claimant was a user of the network. It was undisputed that the scraping took place. It was undisputed that the user's data was affected as part of the scraping. And then also the user's data was found on the dark web. So we have, in this case, when I say undisputed, it means that the parties did not dispute it and the court could base its legal reasoning on these facts. In a lot of cases that we see in practice, these cornerstones are not established. They're very often disputed. Often, perhaps, you don't even know that the claimant is a user of that network. 
There's always dispute or often dispute around whether or not a scraping or a data breach took place or not. It's also not always the case that data is found in the dark web. I think this, even if the finding in the dark web, for example, is not like a written criteria of the loss of control. I think it definitely is an aspect for the courts to say, yes, there was loss of control because we see that the data was uncontrolled in the dark web. So, and that's a point, I don't know if any of you have views on this, also from the technical side. I mean, how easy and how often do we see that, you know, there is like a tag that it says, okay, the data in the dark web is from this social platform? Often, users are affected by multiple data breaches or scrapings, and then it's not possible to make this causal link between one specific scraping or data breach and then data being found somewhere in the web. Do you think, Hannah or Johannes, that this could be an important aspect in the future when courts determine the loss of control, that they also look into, you know, was there actually, you know, a loss of control?  Hannah: I would say yes, because it was already mentioned that the plaintiffs must first prove that there is a causal damage. And a lot of the plaintiffs are using various databases that list such alleged breaches, data breaches, and the plaintiffs always claim that this would indicate such a causal link. And of course, this is now a decisive point the courts have to handle, as it is a requirement. Before you get to the damage and before you can decide if there was a damage, if there was a loss of control, you have to prove if the plaintiff even was affected. And yeah, that's a challenge and not easy in practice because there's also a lot of case law already about these databases or on those databases that there might not be sufficient proof for the plaintiffs being affected by alleged data breaches or leaks.  Andy: All right. So let's see what's happening also in other countries. I mean, the Article 82, as I said in the beginning, is a European piece of law. So other countries in Europe will have to deal with the same topics. We cannot come up with our German requirements or interpretation of immaterial damages that are rather narrow, I would say. So Hannah, any other indications you see from the European angle that we need to have in mind?  Hannah: Yes, you're right. And yet first it is important that this concept of immaterial damage is EU law, is in accordance with EU law, as this is GDPR. And as Johannes said, the ECJ has always interpreted this damage very broadly. And does also not consider a threshold to be necessary. And I agree with you that it is difficult to set such low requirements for the concept of damage and at the same time not demand materiality or a threshold. And in my opinion, the Federal Court of Justice should perhaps have made a submission here to the ECJ after all because it i
Laura-May Scott and Emily McMahan navigate the intricate relationship between AI and professional liability insurance, offering valuable insights and practical advice for businesses in the AI era. Our hosts, both lawyers in Reed Smith’s Insurance Recovery Group in London, delve into AI’s transformative impact on the UK insurance market, focusing on professional liability insurance. AI is adding efficiency to tasks such as document review, legal research and due diligence, but who pays when AI fails? Laura-May and Emily share recommendations for businesses on integrating AI, including evaluating specific AI risks, maintaining human oversight and ensuring transparency. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Laura-May: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the UK insurance market. I'm Laura-May Scott, a partner in our Insurance Recovery and Global Commercial Disputes group based here in our London office. Joining me today is Emily McMahan, a senior associate also in the Insurance Recovery and Global Commercial Disputes team from our London office. So diving right in, AI is transforming how we work and introducing new complexities in the provision of services. AI is undeniably reshaping professional services, and with that, the landscape of risk and liability. Specifically today, we're going to discuss how professional liability insurance is evolving to address AI-related risks, and what companies should be aware of as they incorporate AI into their operations and work product. Emily, can you start by giving our listeners a quick overview of professional liability insurance and how it intersects with this new AI-driven landscape? Emily: Thank you, Laura-May. So, professional liability insurance protects professionals, including solicitors, doctors, accountants, and consultants, for example, against claims brought by their clients in respect of alleged negligence or poor advice. This type of insurance helps professionals cover the legal costs of defending those claims, as well as any related damages or settlements associated with the claim. Before AI, professional liability insurance would protect professionals from traditional risks, like errors in judgment or omissions from advice. For example, if an accountant missed a filing deadline or a solicitor failed to supervise a junior lawyer, such that the firm provided incorrect advice on the law. However, as AI becomes increasingly utilized in professional services and in the delivery of services and advice to their clients, the traditional risks faced by these professionals is changing rapidly. This is because AI can significantly alter how services are delivered to clients. Indeed, it is also often the case that it is not readily apparent to the client that AI has been used in the delivery of some of these professional services. Laura-May: Thank you, Emily. I totally agree with that. Can you now please tell us how the landscape is changing? 
So how is AI being used in the various sectors to deliver services to clients? Emily: Well, in the legal sphere, AI is being used for tasks such as document review, legal research, and within the due diligence process. At first glance, this is quite impressive, as these are normally the most time-consuming aspects of a lawyer's work. So the fact that AI can assist with these tasks is really useful. Therefore, when it works well, it works really well and can save us a lot of time and costs. However, when the use of AI goes wrong, then it can cause real damage. For example, if it transpires that something has been missed in the due diligence process, or if the technology hallucinates or makes up results, then this can cause a significant problem. I know, for example, on the latter point in the US, there was a case where two New York lawyers were taken to court after using ChatGPT to write a legal brief that actually contained fake case citations. Furthermore, using AI poses a risk in the context of confidentiality, where personal data of clients is disclosed to the system or there's a data leak. So when it goes wrong, it can go really wrong. Laura-May: Yes, I can totally understand that. So basically, it all boils down to the question of who is responsible if AI gets something wrong? And I guess, will professional liability insurance be able to cover that? Emily: Yes, exactly. Does liability fall to the professionals who have been using the AI or the developers and providers of the AI? There's no clear-cut answer, but the client will probably no doubt look to the professional with whom they've contracted and who owes them a duty of care, whether that be, for example, a law firm or an accountancy firm, to cover any subsequent loss. In light of this, Laura-May, maybe you could tell our listeners what this means from an insurance perspective. Laura-May: Yes, it's an important question. So since many insurance policies were created before AI, they don't explicitly address AI-related issues. For now, claims arising from AI are often managed on a case-by-case basis within the scope of existing policies, and it very much depends on the policy wording. For example, as UK law firms must obtain sufficient professional liability insurance to adequately cover their current and past services, as mandated by their regulator, the Solicitors Regulation Authority, it is likely that such a policy will respond to claims where AI is used to perform and deliver services to clients and where a later claim for breach of duty arises in relation to that use of AI. Thus, a law firm's professional liability insurance could cover instances where AI is used to perform legal duties, giving rise to a claim from the client. And I think that's pretty similar for accountancy firms who are members of the Institute of Chartered Accountants in England and Wales. So the risks associated with AI are likely to fall under the minimum terms and conditions for their required professional liability insurance, such that any claims brought against accountants for breach of duty in relation to the use of AI would be covered under the insurance policy. However, as time goes on, we can expect to see more specific terms addressing the use of AI in professional liability policies. Some policies might have that already, but I think as we go through the market, it will become more industry standard. And we recommend that businesses review their professional liability policy language to ascertain how it addresses AI risk. 
Emily: Thanks, Laura-May. That's really interesting that such a broad approach is being followed. I was wondering whether you would be able to tell our listeners how you think they should be reacting to this approach and preparing for any future developments. Laura-May: I would say the first step is that businesses should evaluate how AI is being integrated into their services. It starts with understanding the specific risks associated with the AI technologies that they are using and thinking through the possible consequences if something goes wrong with the AI product that's being utilised. The second thing concerns communication. So even if businesses are not coming across specific questions regarding the use of AI when they're renewing or placing professional liability cover, companies should always ensure that they're proactively updating their insurers about the tech that they are using to deliver their services. And that's to ensure that businesses discharge their obligation to give a fair presentation of the risk to insurers at the time of placement or on variation or renewal of the policy pursuant to the Insurance Act 2015. It's also practically important to disclose to insurers fully so that they understand how the business utilizes AI and you can then avoid coverage-related issues down the line if a claim does arise. Better to have that all dealt with up front. The third step is about human involvement and maintaining robust risk management processes for the use of AI. Businesses need to ensure that there is some human supervision with any tasks involving AI and that all of the output from the AI is thoroughly checked. So businesses should be adopting internal policies and frameworks to outline the permitted use of AI in the delivery of services by their business. And finally, I think it's very important to focus on transparency with clients. You know, clients should be informed if any AI tech has been used in the delivery of services. And indeed, some clients may say that they don't want the professional services provider to utilize AI in the delivery of services. And businesses must be familiar with any restrictions that have been put in place by a client. So in other words, informed consent for the use of AI should be obtained from the client where possible. I think these should collectively help, these steps should collectively help all parties begin to understand where the liability lies, Emily. Do you have anything to add? Emily: I see. So it's basically all about taking a proactive rather than a reactive attitude to this. Though times may be uncertain, companies should certainly be preparing for what is to come. In terms of anything to add, I would also just like to quickly mention that if a firm uses a third-party AI tool instead of its own tool, risk management can become a little more complex. This is because if a firm develops their own AI tool, they know how it works and the
Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and the best practices for designing, managing and responding to AI-related cyber risks. Partner Christian Leuthner in Frankfurt and partner Cynthia O'Donoghue in London, with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, and I'm with my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office. Cynthia: Morning, Christian. Thanks. Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you. Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models, LLMs, is significantly lowering the barriers to entry for cyber attacks. The technology, so AI, enhances the scope, speed, and impact of cyber attacks, of malicious activities, because it simplifies social engineering, and it really makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had some attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat? Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. And so part of the principle behind the EU AI Act is security by design. And so the idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity. And to prevent such things as data poisoning and model poisoning. And it also talks about the horizontal laws across the EU. So because the EU AI Act treats AI as a product, it brings into play other EU directives, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, you know, most of our clients are concerned about the use of AI systems and, let's say, ensuring that they're secure. But really, you know, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks. 
So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance? Christian: Yeah, I think, and you mentioned it already, the legislator thought there is a link, and the high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires some stakeholders in software and hardware products to also implement security measures and also imposes a number of different obligations on them. To not over-engineer these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts. They can just rely on what they have implemented. But it would be great if we're not only applying the law, but if there would also be some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements? Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and it produced a paper called Multi-Layer Framework for Good Cybersecurity Practices for AI last year. So it still needs to be updated. However, it does provide a very good summary of various AI initiatives throughout the world. And it generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities, the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, various threats that you already talked about, such as data poisoning and model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. So in terms of specific guidelines or standards, one that ENISA mentions is ISO/IEC 42001. It's an AI management system standard. And another noteworthy guideline mentioned is the NIST AI Risk Management Framework, obviously the US guidelines. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI: valid, reliable, safe, secure, and resilient. Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there might be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say? Cynthia: Well, there are numerous notification obligations in relation to attacks, again, depending on the type of data or the entity involved. For instance, if, as a result of a breach from an AI attack, personal data is involved, then there are notification requirements under the GDPR, for instance. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities. 
And, you know, depending on whether the sector the particular victim is in is subject to either, you know, the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, who Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity in the relevant member state for that particular sector. And I'm sure there's other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? And so why don't you tell us a bit more about the notification requirements for financial services organizations? Asélle: The EU Digital Operational Resilience Act also applies similar requirements to the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall into. And Article 30 of DORA requires specific contractual clauses requiring cybersecurity around data. So it requires provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product as an ICT product, plays a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it would mean organizations that are regulated in this way are likely to ask AI providers to have additional tools, policies, and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding approach to regulating AI. However, the Labour government appears to want to adopt a binding instrument. However, it is likely to be of limited scope, focusing only on the most powerful AI models. And there isn't yet any clarity in terms of whether the use of AI in cyber threats is regulated in any specific way. Christian, I wanted to direct a question to you. How about the use of AI in supply chains? Christian: Yeah, I think it's very important to have a look at the entire supply chain of the companies, or the entire contractual relationships. Because most of our clients or companies out there do not develop or create their own AI. They will use AI from vendors or their suppliers, or their vendors will use AI products to be more efficient. And all the requirements, for example, the notification requirements that Cynthia just mentioned, they do not stop if you use a third party. So even if you engage a supplier, a vendor, you're still responsible to defend against cyber attacks and to report cyber incidents or attacks if they concern your company. Or at least there's a high likelihood. So it's very cruc
Reed Smith lawyers Cheryl Yu (Hong Kong) and Barbara Li (Beijing) explore the latest developments in AI regulation and litigation in China. They discuss key compliance requirements and challenges for AI service providers and users, as well as the emerging case law on copyright protection and liability for AI-generated content. They also share tips and insights on how to navigate the complex and evolving AI legal landscape in China. Tune in to learn more about China's distinct approach to issues involving AI, data and the law.  ----more---- Transcript:  Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Cheryl: Welcome to our Tech Law Talks and new series on artificial intelligence. Over the months, we have been exploring the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI regulations in China and the relevant PRC court decisions. My name is Cheryl Yu, a partner in the Hong Kong office at Reed Smith, and I'm speaking today with Barbara Li, who is a partner based in our Beijing office. Barbara and I are going to focus on the major legal regulations on AI in China and also some court decisions relating to AI tools to see how China's legal landscape is evolving to keep up with the technological advancements. Barbara, can you first give us an overview of China's AI regulatory developments? Barbara: Sure. Thank you, Cheryl. Very happy to do that. In the past few years, the regulatory landscape governing AI in China has been evolving at a very fast pace. Although China does not have a comprehensive AI law like the EU AI Act, China has been leading the way in rolling out multiple AI regulations governing generative AI, deepfake technologies, and algorithms. In July 2023, China issued the Generative AI Measures, becoming one of the first countries in the world to regulate generative AI technologies. These measures apply to generative AI services offered to the public in China, regardless of whether the service provider is based in China or outside China. And international investors are allowed to set up local entities in China to develop and offer AI services in China. In relation to the legal obligations, the measures lay down a wide range of legal requirements for providing and using generative AI services, including content screening, protection of personal data and privacy, safeguarding IPR and trade secrets, and also taking effective measures to prevent discrimination when companies design algorithms, choose training data, or create large language models. Cheryl: Many thanks, Barbara. These are very important compliance obligations that businesses should not neglect when engaging in the development of AI technologies, products, and services. I understand that one of the biggest concerns in AI is how to avoid hallucination and misinformation. I wonder if China has adopted any regulations to address these issues? Barbara: Oh, yes, definitely, Cheryl. China has adopted multiple regulations and guidelines to address these concerns. 
For example, the Deep Synthesis Rule, which became effective from January 2023: this regulation aims to govern the use of deepfake technologies in generating or changing digital content. And when we talk about digital content, the regulation refers to a wide range of digital media, including video, voices, text, and images. And the deep synthesis service providers must refrain from using deep synthesis services to produce or disseminate illegal information. And also, the companies are required to establish and improve proper compliance and risk management systems, such as having a user registration system, doing the ethics review of the algorithm, protecting personal information, taking measures to protect IT and prevent misinformation and fraud, and also, last but not least, setting up a data breach response. In addition, China's National Data and Cybersecurity Regulator, which is the CAC, has issued a wide range of rules on algorithm filing. And these algorithm filing requirements have become effective from June 2024. According to this 2024 regulation, if a company uses algorithms in its online services with the functions of blogs, chat rooms, public accounts, short videos, or online streaming, and these functions are capable of influencing public opinion or driving social engagement, then the service provider is required to file its algorithm with the CAC, the regulator, within 10 working days after the launch of the service. So in order to finish the algorithm filing, the company is required to put together comprehensive information and documentation. That information and documentation includes the algorithm assessment report, security monitoring policy, data breach response plan, and also some technical documentation to explain the function of the algorithm. And also, the CAC has periodically published a list of filed algorithms, and up to the 30th of June 2024, we have seen over 1,400 AI algorithms, developed by more than 450 companies, successfully filed with the CAC. So you can see this large number of AI algorithm filings indeed highlights the rapid development of AI technologies in China. And also, we should remember that a large volume of data is the backbone of AI technologies. So we should not forget about the importance of data protection and privacy obligations when you develop and use AI technologies. Over the years, China has built up a comprehensive data and privacy regime with three pillars of national laws. Those laws include the Personal Information Protection Law, normally known in short as the PIPL, and also the Cybersecurity Law and the Data Security Law. So the data protection and cybersecurity compliance requirements have got to be properly addressed when companies develop AI technologies, products, and services in China. And indeed, there are some very complicated data requirements and issues under the Chinese data and cybersecurity laws, for example, how to address cross-border data transfers. So it's very important to remember those requirements. The Chinese data requirements and legal regime are very complex. So given the time constraints, probably we can find another time to specifically talk about the data issues under the Chinese laws. Cheryl: Thanks, Barbara. Indeed, there are some quite significant AI and data issues which would warrant more time for a deeper dive. 
Barbara, can you also give us some update on the AI enforcement status in China and share with us your views on the best practices that companies can take in mitigating those risks? Barbara: Yes, thanks, Cheryl. Indeed, Chinese AI regulations do have teeth. For example, the violation of the algorithm filing requirement can result in fines up to RMB 100,000. And also, the failure to comply with those compliance requirements in developing and using technologies can also trigger legal liability under the Chinese PIPL, which is the Personal Information Protection Law, and also the Cybersecurity Law and the Data Security Law. And under those laws, a company can be subject to a monetary fine of up to RMB 15 million or 5% of its last year's turnover. In addition, the senior executives of the company can be personally subject to liability, such as a fine of up to RMB 1 million, and the senior executives can be barred from taking senior roles for a period of time. In the worst scenario, criminal liability can be pursued. So, in the first and second quarters of this year, 2024, we have seen some companies being caught by the Chinese regulators for failing to comply with the AI requirements, ranging from failure to monitor AI-generated content to neglecting the AI algorithm filing requirements. Noncompliance has resulted in the suspension of their mobile apps pending rectification. As you can see, the noncompliance risk is indeed real, so it's very important for businesses to pay close attention to the relevant compliance requirements. So to just give our audience a few quick takeaways in terms of how to address the AI regulatory and legal risk in China, we would say the companies can probably consider the three most important compliance steps. The first is that, with the fast development of AI in China, it's crucial to closely monitor the legislative and enforcement developments in AI, data protection, and cybersecurity. While the Chinese AI and data laws share some similarities with the laws in other countries, for example, the EU AI Act and the European GDPR, Chinese AI and data laws and regulations indeed have their unique characteristics and requirements. So it's extremely important for businesses to understand the Chinese AI and data laws, conduct a proper analysis of the key business implications, and also take appropriate compliance action. So that is number one. And the second one, I would say, in terms of your specific AI technologies, products and services rolling out in the China market, it's very important to do the required impact assessment to ensure compliance with accountability, bias, and also accessibility requirements, and also build up a proper system for content monitoring. If your algorithm falls within the scope of the filing requirements, you definitely need to prepare the required documents and finish the algorithm filing as soon as possible to avoid the potential penalties and compliance risks. And the third one is that you should definitely prepare the China AI policies, the AI terms of use, and build up your AI governanc
Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role of regulators play in supervising and embracing AI. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner in Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.  Romin: Thank you, Claude. Good to be with everyone.  Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but many aspects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector.  Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by searches for efficiency, cost savings, as I'm sure the audience would appreciate. There have been pressures on margins and financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business, which AI has already impacted, include things like KYC, AML checks, back office operations. All of those things are already having AI applied to them.  Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there's areas of concern, given the good that could come out of AI?  Romin: No, that's a good question. 
I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less, it is less resource intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility there? Is it the software provider, who makes the algorithm, programs the software, etc., and then the software goes off and makes decisions or provides the advice? Or is it the firm who's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling through and working out what they think the right answers should be.  Claude: Yeah, I can see that, because I suppose historically the classic model, certainly in the UK, has been the regulators say, if you want to outsource something, you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity, you're responsible for your outsourcer or your outsource provider. But I can see with AI, that must become a harder question to determine, you know, because, say, in your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact the software provider? That's sort of one point. And, you know, how do you allocate that responsibility? You know, that strict bright line: if you want to give it to a third-party provider, it's your responsibility. How do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.  Romin: Absolutely. And as you say, with traditional outsourced services, it's relatively easy for the firm to oversee the activities of the outsourced services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, a trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult.  Claude: And I suppose the other thing that makes it more difficult with AI, compared to the traditional outsourcing model, even the black box algorithms, is by and large they're static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does. 
So it doesn't matter really whether it's outsourced or it's in-house to the regulated entity. That thing's sort of changing all the time, and supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed of it evolving.  Romin: Absolutely. And you're right to highlight all of the sort of liability issues that arise, not just simply vis-a-vis sort of liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is that with the firm? Is that with the person who provided the software? It's all, you know, a little difficult.  Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. One might reasonably expect, competition being what it is, for that to proliferate over time, but until it does, I would imagine there's a sort of competition issue, not only a competition issue in one system gaining a monopoly, but that particular form of large model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk by the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated.  Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects as AI becomes more prevalent, potentially being even more severe in the future.  Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.  Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But here, in a world where we have a number of software providers, maybe one or two of which become really dominant, and lots of firms are employing technology provided by these firms, differentiating becomes more difficult in those circumstances.  Claude: Yeah, and I guess to unpack that a little bit, you know, as you say, portfolio managers have distinguished themselves by better returns than the competition and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes to what extent is AI being used to pro
This episode highlights the new benefits, risks and impacts on operations that artificial intelligence is bringing to the transportation industry. Reed Smith transportation industry lawyers Han Deng and Oliver Beiersdorf explain how AI can improve sustainability in shipping and aviation by optimizing routes and reducing fuel consumption. They emphasize AI’s potential contributions from a safety standpoint as well, but they remain wary of risks from cyberattacks, inaccurate data outputs and other threats. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Han: Hello, everyone. Welcome to our new series on AI. Over the coming months, we will explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, my colleague Oliver and I will focus on AI in shipping and aviation. My name is Han Deng, a partner in the transportation industry group in New York, focusing on the shipping industry. So AI and machine learning have the potential to transform the transportation industry. What do you think about that, Oliver?  Oliver: Thanks, Han, and it's great to join you. My name is Oliver Beiersdorf. I'm a partner in our transportation industry group here at Reed Smith, and it's a pleasure to be here. I'm going to focus a little bit on the aviation sector. And in aviation, AI is really contributing to a wide spectrum of value opportunities, including enhancing efficiency, as well as safety-critical applications. But we're still in the early stages. The full potential of AI within the aviation sector is far from being harnessed. For instance, there's huge potential for use in areas which will reduce human workload or increase human capabilities in very complex scenarios in aviation.  Han: Yeah, and there's similar potential within the shipping industry with platforms designed to enhance collision avoidance, route optimization, and sustainability efforts. In fact, AI is predicted to contribute $2.5 trillion to the global economy by 2030.  Oliver: Yeah, that is a lot of money, and it may even be more than that. But with that economic potential, of course, also comes substantial risks. And AI users and operators and industries now getting into using AI have to take preventative steps to avoid cyber security attacks. Inaccurate data outputs, and other threats.  Han: Yeah, and at Reed Smith, we help our clients to understand how AI may affect their operations, as well as how AI may be utilized to maximize potential while avoiding its pitfalls and legal risks. During this seminar, we will highlight elements within the transportation industry that stand to benefit significantly from AI.  Oliver: Yeah, so a couple of topics that we want to discuss here in the next section, and there's really three of them which overlap between shipping and aviation in terms of the use of AI. And those topics are sustainability, safety, and business efficiency with the use of AI. In terms of sustainability, across both sectors, AI can help with route optimization, which saves on fuel and thus enhances sustainability.  
Han: AI can make a significant difference in sustainability across the whole of the transportation industry by decreasing emissions. For example, within the shipping sector, emerging tech companies are developing systems that can directly link the information generated about direction and speed to a ship's propulsion system for autonomous regulation. AI also has the potential to create optimized routes using sensors that track and analyze real-time and variable factors such as wind speed and current. AI can determine both the ideal route and speed for a specific ship at any point in the ocean to maximize efficiency and minimize fuel usage.  Oliver: So you can see the same kind of potential in the aviation sector. For example, AI has the potential to assist with optimizing flight trajectories, including creating so-called green routes and increasing prediction accuracy. AI can also provide key decision makers and experts with new features that could transform air traffic management in terms of new technologies and operating procedures and creating greater efficiencies. Aside from reducing emissions, these advances have the potential to offer big savings in energy costs, which, of course, is a major factor for airlines and other players in the industry, with the cost of gas being a major factor in their budgets, and in particular, jet fuel for airlines. So advances here really have the potential to offer big savings that will enable both sectors to enhance profitability while decreasing reliance on fossil fuels.  Han: I totally agree. And further, you know, in terms of safety, AI can be used within the transportation industry to assist with safety assessment and management by identifying, managing, and predicting various safety risks.  Oliver: Right. So, in the aviation sector, AI has the potential to increase safety by driving the development of new air traffic management systems to maintain distances between aircraft, planning safer routes, assisting in approaches to busy airports, and the development of new conflict detection, traffic advisories, and resolution tools, along with cyber resilience. What we're seeing, of course, in aviation, and there's a lot of discussion about it, is the use of drones and eVTOLs, or electric vertical takeoff and landing aircraft, all of which add more complexity to the existing use of airspace. And you're seeing many players in the industry, including retailers who deliver products, using eVTOLs and drones to deliver product. And AI can be a useful assistant to ATM actors, from planning to operations, and really across all airspace users. It can benefit airline operators as well, who depend on predictable routine routes and services, by using aviation data to predict air traffic management more accurately.  Han: That's fascinating, Oliver. Same within the shipping sector: for example, AI has the capacity to create 3D models for areas and use those models to simulate the impact of disruptions that may arise. AI can also enhance safety features through the use of vision sensors that can respond to ship traffic and prevent accidents. As AI begins to be able to deliver innovative responses that enhance the predictability and resilience of the traffic management system, increased efficiency will improve productivity and enhance the use of scarce resources like airspace, runways, and so on.  Oliver: Yeah. So it'll be really interesting to follow, you know, how this develops. It's all still very new. 
Another area where you're going to see the use of AI, and we already are, is in terms of business efficiency, again, in both the shipping and aviation sectors. There's really a lot of potential for AI, including in generating data and cumulative reports based on real-time information. And by increasing the speed by which the information is processed, companies can identify issues early on and perform predictive maintenance to minimize disruptions. The ability to generate reports is also going to be useful in ensuring compliance with regulations and also coordinating work with contractors, vendors, partners, such as code share partners in commercial aviation, and other stakeholders in the industry.  Han: Yeah, and AI can be used to perform comprehensive audits to ensure that all cargo is present and that it complies with contracts and local and national regulation, which can help identify any discrepancies quickly and lead to swift resolution. AI can also be used to generate reports based on this information to provide autonomous communication with contractors about cargo location and the estimated time of arrival, increasing communication and visibility in order to inspire trust and confidence. Aside from compliance, these reports will also be useful in ensuring efficiencies in management and business development and strategy by performing predictive analytics in various areas, such as demand forecasting.  Oliver: And despite all these benefits, of course, as with any new technology, you need to weigh that against the potential risk and various things that can happen by using AI. So let's talk a little bit about cybersecurity and regulation being unable to keep pace with technology development, inaccurate data, and industry fragmentation. Things are just happening so fast that there's a huge risk associated with the use of artificial intelligence in many areas, but also in the transportation industry, including as a result of cybersecurity attacks. Data security breaches can affect airline operators or can also occur on vessels, in port operations, and in undersea infrastructure. Cyber criminals, who are becoming more and more sophisticated, can even manipulate data inputs, causing AI platforms on vessels to misidentify malicious maritime activity as legitimate trade or as safe. Actors using AI are going to need to ensure the cyber safety of AI-enabled systems. I mean, that's a focus in both shipping and aviation and in other industries. Businesses and air traffic providers need to ensure that AI-enabled applications have robust cybersecurity elements built into their operational and maintenance schedules. Shipping companies will need to update their current cybersecurity systems and risk assessment plans to address these threats and comply with relevant data and privacy laws. A real recent example is the CrowdStrike software outage on July 19th, which really affected almost every industry. But we saw it being particularly acute in the aviation industry and commercial aviation with literally thousands of flights being cancel
Emerging technology lawyers Therese Craparo, Anthony Diana and Howard Womersley Smith discuss the rapid advancements in AI in the financial services industry. AI systems have much to offer, but most bank compliance departments cannot keep up with the pace of integration. The speakers explain that if financial institutions turn to outside vendors to implement AI systems, they must work to achieve effective risk management that extends to third-party vendors.   ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Therese: Hello, everyone. Welcome to Tech Law Talks and our series on AI. Over the coming months, we'll be exploring the key challenges and opportunities within the rapidly evolving AI landscape. And today we'll be focusing on AI in banking and the specific challenges we're seeing in the financial services industry and how the financial services industry is approaching those types of challenges with AI. My name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith, and I will let my colleagues on this podcast introduce themselves. Anthony?  Anthony: Hey, this is Anthony Diana, partner in the New York office of Reed Smith, also part of the Emerging Technologies Group, and also, for today's podcast, importantly, I'm part of the Bank Tech Group.  Howard: Hello, everyone. My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies Group at Reed Smith in London. As Anthony says, I'm also part of the Bank Tech Group. So back to you, Therese.  Therese: All right. So just to start out, what are the current developments or challenges that you all are seeing with AI in the financial services industry?  Anthony: Well, I'll start. I think a few things. Number one, I think we've seen that the financial services industry is definitely all in on AI, right? I mean, there's definitely a movement in the financial services industry. All the consultants have said this, that this is one of the areas where they expect AI, including generative AI, to really have an impact. And I think that's one of the things that we're seeing is there's a tremendous amount of pressure on the legal and compliance departments because the businesses are really pushing to be AI forward and really focusing on AI. So one of the challenges is that this is here. It’s now. It's not something you can plan for. I think half of what we're seeing is AI tools are coming out frequently, sometimes not even with the knowledge of legal and compliance, sometimes with knowledge of the business, where because it's in the cloud, they just put in an AI feature. So that is one of the challenges that we're dealing with right now, which is catch up. Things are moving really quickly, and then people are trying to catch up to make sure that they're compliant with whatever regs are out there. Howard?  Howard: I agree with that. I think that banks are all in with the AI hype cycle, and I certainly think it is a hype cycle. I think that generally the sector is at the same pace, and at the moment we're looking at an uptick of interest in and procurement of AI systems into the infrastructure of banks. 
I think that, you know, from the perspective of what the development phase is, I think we are just looking at the stage where they are buying in AI. We are beyond the look and see, the sourcing phase, and looking at the buying phase and the implementation of AI into those banks. And, you know, what are the challenges there? Well, the challenges are twofold. One is from an existential perspective. Banks are looking to increase shareholder value, and they are looking to drive down costs, and we've seen that too with the dependency on technology that banks have had over the past 15 or more years. AI is an advance on that, and it's an ability for banks to introduce more automation within their organizations and to focus less on humans and personnel. And we'll talk a bit more about what that involves and the risks, particularly, that could be created from relying solely on technology and not involving humans, which some proponents of AI anticipate.  Therese: And I think what's interesting, just picking up on what both of you are saying, in terms of how those things come together, including from a regulatory perspective, is that historically the financial industry has used variations of AI in a lot of different ways for trading analysis, for data analysis and the like. So the concept of AI is not unheard of in the financial services industry, but I do think it is interesting to talk about, Howard, what you're saying about the hype cycle around generative AI. That's what's throwing kind of a wrench in the process, not just for traditional controls around, you know, AI modeling and the like, but also for business use, right? Because, you know, as Howard's saying, the focus currently is how do we use all of these generative AI tools to improve efficiencies, to save costs, to improve business operations, which is different than the use cases that we've seen in the past. And at the same time, Anthony, as you're saying, it's coming out so quickly and so fast. The development is so fast, relatively speaking. The variety of use cases is so broad in a way that it hasn't been before. And the challenges that we're seeing is that the regulatory landscape, as usual with technology, isn't really keeping up. We've got guidance coming from, you know, various regulators in the U.S. The SEC has issued guidance. FINRA has issued guidance. The CFPB has issued guidance. And all of their focus is a little bit different in terms of their concerns, right? There's concerns about ethical use and the use with consumers and the accuracy and transparency and the like. But there's concerns about disclosure and appropriate due diligence and understanding of the AI that's being used. And then there's concerns about what data it's being used on and the use of AI on highly confidential information like MNPI, like CSI, like consumer data and the like. And none of it is consolidated or clear. And that's in part because the regulators are trying to keep up. And they do tend not to want to issue strict guidance on technology as it's developing, right, because they're still trying to figure out what the appropriate use is. So we have this sort of confluence of brand new use cases, democratization, the ability to, you know, extend the use of AI very broadly to users, and then the speed of development that I think the financial services industry is struggling to keep up with themselves.  Anthony: Yeah, and I think the regulators have been pretty clear on that point. 
Again, they're not giving specific guidance, I would say, but they say two of the things that they are most concerned with are, first, the AI washing, and they've already issued some fines where, if you tout that you're using AI, you know, for trading strategies or whatever, and you're not, you're going to get dinged. So that's obviously going to be part of whatever financial services due diligence you're going to be doing on a product; making sure that it actually is AI is going to be important, because that's something the regulators care about. And then the other thing, as you said, is the sensitive information, whether it's material, non-public information or, as you said, the confidential supervisory information: any AI touching on those things is going to be highly sensitive. And I think, you know, one of the challenges that most financial institutions have is they don't know where all this data is, right? Or they don't have controls around that data. So I think that's, again, part of the challenge, as much as every financial institution is going out there saying, we're going to be leveraging AI extensively. And whether they are or not remains to be seen. There are potential regulatory issues with saying that and not actually doing it, which is, I think, somewhat new. And I think, just as we sort of talked about, the question is are the financial institutions really prepared for this level of change that's going on? And I think that's one of the challenges that we're seeing, is that, in essence, they're not built for this, right? And Howard, you're seeing it on the procurement side a lot as they're starting to purchase this. Therese and I are seeing it on the governance side as they try to implement this, and they're just not ready, because of the risks involved, to actually fully implement or use some of these technologies.  Therese: So then what are they doing? What do we see the financial services industry doing to kind of approach the management and governance of AI in the current environment?  Howard: Well, I can answer that from an operational perspective before we go into a governance perspective. From an operational perspective, it's what Anthony was alluding to, which is banks cannot keep up with the pace of innovation. And therefore, they need to look out into the market for technological solutions that advance them over their competitors. And when they're all looking at AI, they're all clambering over each other to look at the best solutions to procure and implement into their organizations. We're seeing a lot of interest from banks in buying AI systems from third-party providers. From a regulatory landscape, that draws in a lot of concern, because there are existing regulations in the US, in the UK and EU around how you control your supply chain and make sure that you manage your organization responsibly and faithfully with adequate risk management systems, whic
Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act; namely, its implications for medical device manufacturers. They examine amazing new opportunities being created by AI, but they also warn that medical-device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU.  ----more---- Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with everyday. Cynthia: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI and life sciences, particularly medical devices. I'm Cynthia O’Donoghue. I'm a Reed Smith partner in the London office in our emerging technology team. And I'm here today with Wim Vandenberghe. Wim, do you want to introduce yourself? Wim: Sure, Cynthia. I'm Wim Vandenberghe, I'm a life science partner out of the Brussels office, and my practice is really about regulatory and commercial contracting in the life science space. Cynthia: Thanks, Wim. As I mentioned, we're here to talk about the EU AI Act that came into force on the 2nd of August, and it has various phases for when different aspects come into force. But I think a key thing for the life sciences industry and any developer or deployer of AI is that research and development activity is exempt from the EU AI Act. And the reason it was done is because the EU wanted to foster research and innovation and development. But the headline sounds great. If, as a result of research and development, that AI product is going to be placed on the EU market and developed, essentially sold or used in products in the EU, it does become regulated under the EU AI Act. And there seems to be a lot of talk about interplay between the EU AI Act and various other EU laws. So Wim, how does the AI Act interplay with the medical devices regulation, the MDR and the IVDR? Wim: That's a good point, Cynthia. And that's, of course, you know, where a lot of the medical device companies are looking at kind of like that interplay and potential overlap between the AI Act on the one hand, which is a cross-sectoral piece of legislation. So it applies to all sorts of products and services, whereas the MDR and the IVDR are of course only applicable to medical technologies. So in summary, you know, the medical, both the AI Act and the MDR and IVDR will apply to AI systems, provided, of course, that those AI systems are in scope of the respective legislation. So maybe I'll start with the MDR and IVDR and then kind of turn to the AI Act. Under the MDR and the IVDR, of course, there's many AI solutions that are either considered to be a software as a medical device in their own right, or they are part or component of a medical technology. So to the extent that this AI system as software meets the definition of a medical device under the MDR or under the IVDR, it would actually qualify as a medical device. And therefore, the MDR and IVDR is fully applicable to those AI solutions. 
Stating the obvious, you know, there's plenty of AI solutions that are already now on the market and being used in a healthcare setting as well. What the AI Act kind of focuses on, particularly with regard to medical technology, is the so-called high-risk class of AI systems. And for a medical technology to be a high-risk AI system under the AI Act, it's essentially a twofold kind of criteria that needs to apply. First of all, the AI solution needs to be a medical device or an in vitro diagnostic under the sector legislation, so the MDR or the IVDR, or it is a safety component of such a medical product. Safety component is not really explained in the AI Act, but think about, for example, the failure of an AI system to interpret diagnostic IVD instrument data that could endanger the health of a person by generating false positives. That would be a safety component. So that's the first step: you have to see whether the AI solution qualifies as a medical device or is a safety component of a medical device. And the second step is that it only applies to AI solutions that are actually undergoing a conformity assessment by a notified body under the MDR or the IVDR. So, to make kind of a long story short, it actually means that medical devices that are either Class IIa, IIb, or III will be in the scope of the AI Act. And for the IVDR, for in vitro diagnostics, that would be classes B to D, the risk classes that would then be captured by the AI Act. So that essentially is kind of like determining the scope and the applicability of the AI Act. And Cynthia, maybe coming back to an earlier point of what you said on research, I mean, the other kind of curious thing as well that the AI Act doesn't really kind of foresee is the fact that, of course, you know, for getting an approved medical device, you need to do certain clinical investigations and studies on that medical device. So you really have to kind of test it in a real world setting. And that happens by a clinical trial, a clinical investigation. The MDR and the IVDR have elaborate kind of rules about that. And the very fact that you do this prior to getting your CE mark and your approval and then launching it on the market is very standard under the MDR and the IVDR. However, under the AI Act, which also requires CE marking and approval, and we'll come to that a little bit later, there's no mention of such clinical and performance evaluation of medical technology. So if you would just read the AI Act like that, it would mean actually that you need to have a CE mark for such a high-risk AI system, and only then you can do your clinical assessment. And of course, that wouldn't be consistent with the MDR and the IVDR. And we can talk a little bit later about consistency between the two frameworks as well. You know, the one thing that I do see as being very new under the AI Act is everything to do around data and data governance. And I'm just, you know, kind of wondering, Cynthia, given your experience, if you can maybe talk a little bit about what the requirements are going to be for data and data governance under the AI Act. Cynthia: Thanks, Wim. Well, the AI Act obviously defers to the GDPR, and the GDPR, which regulates how data is used and transferred within the EEA member states and then transferred outside the EEA, has to interoperate with the EU AI Act. 
In the same way as you were just saying that the MDR and the IVDR need to interoperate, and you touched, of course, on clinical trials, so the clinical trial regulation would also have to work and interoperate with the EU AI Act. Obviously, if you're working with medical devices, most of the time it's going to involve personal data and what is called sensitive or special category data, data concerning health, about patients or participants in a clinical trial. So, you know, a key part of AI is that training data. And so the data that goes in, that's ingested into the AI system for purposes of a clinical trial or for a medical device, needs to be as accurate as possible. And obviously the GDPR also includes a data minimization principle. So the data needs to be the minimum necessary. But at the same time, you know, that training data, depending on the situation, in a clinical trial might be more controlled. But once a product is put into the market, there could be data that's ingested into the AI system that has anomalies in it. You know, you mentioned false positives, but there's also a requirement under the AI Act to ensure that the EU's ethical principles for AI, which are non-binding, are adhered to. And one of those is human oversight. So obviously, if there are anomalies in the data and the outputs from the AI would give false positives or create other issues with the output, the EU AI Act requires, once a CE mark is obtained, just like the MDR does, for there to be a constant conformity assessment to ensure that any kind of anomalies, and/or the necessity for human intervention, is addressed on a regular basis as part of reviewing the AI system itself. So we've talked about high-risk AI. We've talked a little bit about the overlap between the GDPR and the EU AI Act and the MDR and the IVDR overlap and interplay. Let's talk about some real-world examples, for instance. I mean, the EU AI Act also classes education as potentially high risk if any kind of vocational training is based solely on assessment by an AI system. How does that potentially work with the way medical device organizations and pharma companies might train clinicians? Wim: It's a good question. I mean, normally, you know, those kinds of programs would typically not be captured, you know, by the definition of a medical device, you know, through the MDR. So they'd most likely be out of scope, unless it is programs that are actually kind of extending also to real-life diagnosis or cure or treatment, you know, helping the physician to make their own decision. But if it's really about kind of training, it normally would fall out of scope. And that'd be very different here with the AI Act. Actually, it would be captured, it would be qualified as high risk. And what it would mean is that, maybe different from a medical device manufacturer that would be very used to a lot of the concepts that are used in the AI Act as well. And
Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI and a regional perspective, looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining.  Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters.  Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now?  Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation, which went into effect on the 2nd of October, that regulates general purpose AI and high-risk general purpose AI and bans certain aspects of AI. But that's only part of the European ecosystem. The EU AI Act essentially will interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU, and the AI Act has essentially phased dates of effectiveness, but the biggest aspect of the EU AI Act in terms of governance lays out quite a lot, and so it's a perfect time for organizations to start thinking about that and getting ready for various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.?  Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where they're going to land on AI regulation. Not to say that we don't have legislation that has been put into place. We have Colorado with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which have really informed the governance process. 
And I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. And I think the one thing I would highlight, because we're sort of operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, both from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether it's internally or externally. And a lot of the governance process and program pulls guidance from some of those internal ethics as well.  Cynthia: Interesting. So I’d say somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree?  Andy: Yeah, that was also the question I wanted to ask Nikki, where she sees the parallels and whether organizations, in her view, can follow a global approach for AI governance. And yes, for the question you asked, the AI Act, the European one, is more encompassing. It is putting a lot of obligations on developers and deployers, like companies that use AI in the end. Of course, it also has the consumer or the user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So yeah, Nikki, you always know US law and you have a good overview over European laws, whereas we are always struggling with all the many US laws. So what's your thought? Can companies, in terms of AI governance, follow a global approach?  Monique: In my opinion? Yeah, I do think that there will be a global approach. You know, the way the US legislates, what we've seen is a number of laws that are governing certain uses and outputs first, perhaps because they were easier to pass than such a comprehensive law. So we see laws that govern the output in terms of use of likenesses, right of publicity violations. We're also seeing laws come up that are regulating the use of personal information in AI as a separate category. Outside of the consumer, the corporate consumer base, we're also seeing a lot of laws around elections. And then finally, we're seeing laws pop up around disclosure for consumers that are interacting with AI systems, for example, AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. So for example, Colorado did pass a comprehensive AI law, which speaks to both obligations for developers and obligations for deployers, similar to the way the EU AI Act is structured, and focusing on what Colorado calls high-risk AI systems, as well as algorithmic discrimination, which I think doesn't exactly follow the EU AI Act, but draws similar parallels and, I think, pulls a lot of principles from it. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because the simple answer is it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps in order to deploy the AI system. 
And so inherently, to me, that means that developers have to have a risk management process themselves if they're going to be able to comply with their obligations under Colorado law. So, you know, because I know that there are a lot of parallels between what Colorado has done, what we see in the memo to federal agencies and the EU AI Act, maybe I can ask you, Cynthia and Andy, to kind of talk a little bit about what are some of the ways that companies approach setting up the structure of their governance program. What are some buckets that they look at, or what are some of the first steps that they take?  Cynthia: Yeah, thanks, Nikki. I mean, it's interesting because you mentioned the company-specific uses and internal and external. I think one thing, you know, before we get into the governance structure, or maybe part of thinking about the governance structure, is that the EU AI Act also applies to employee data and use of AI systems for vocational training, for instance. So I think, in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using that high-risk or general purpose AI and, you know, some of the documentation and certification requirements that might apply to the high-risk versus general purpose. But the governance structure needs to take all those kinds of things into account. So, you know, obviously guidelines and principles about how people use external AI suppliers, how it's going to be used internally, what are the appropriate uses. You know, obviously, if it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, as well as how is that chatbot set up in terms of what it would be appropriate to use it for. So what are the appropriate use cases? So, you know, guidelines and policies, definitely foremost for that. And within those guidelines and policies, there's also, you know, the other documents that will come along. So terms of use, I mentioned acceptable use, and then guardrails for the chatbot. I mean, one of the big things for EU AI is human intervention, to make sure that if there are any anomalies or somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that.  Andy: Yeah, definitely. I mean, the risk management process in the wider sense. Of course, how organizations start this at the moment is first setting up teams or, you know, responsible persons within the organization that take care of this, and we're going to discuss a bit later on what that structure can look like. And then, of course, the policies you mentioned, not only regarding the use, but also how to, or which process to follow, when AI is being used, or even the question what is AI and how do we at all find out in our org
Reed Smith partners share insights about U.S. Department of Health and Human Services initiatives to stave off misuse of AI in the health care space. Wendell Bartnick and Vicki Tankle discuss a recent executive order that directs HHS to regulate AI’s impact on health care data privacy and security and investigate whether AI is contributing to medical errors. They explain how HHS collaborates with non-federal authorities to expand AI-related protections; and how the agency is working to ensure that AI outputs are not discriminatory. Stay tuned as we explore the implications of these regulations and discuss the potential benefits and risks of AI in healthcare.  ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Wendell: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in healthcare. My name is Wendell Bartnick. I'm a partner in Reed Smith's Houston office. I have a degree in computer science and focused on AI during my studies. Now, I'm a tech and data lawyer representing clients in healthcare, including providers, payers, life sciences, digital health, and tech clients. My practice is a natural fit given all the innovation in this industry. I'm joined by my partner, Vicki Tankle.  Vicki: Hi, everyone. I'm Vicki Tankle, and I'm a digital health and health privacy lawyer based in Reed Smith's Philadelphia office. I've spent the last decade or so supporting health industry clients, including healthcare providers, pharmaceutical and medical device manufacturers, health plans, and technology companies navigate the synergies between healthcare and technology and advising on the unique regulatory risks that are created when technology and innovation far outpace our legal and regulatory frameworks. And we're oftentimes left managing risks in the gray, which as of today, July 30th, 2024, is where we are with AI and healthcare. So when we think about the use of AI in healthcare today, there's a wide variety of AI tools that support the health industry. And among those tools, a broad spectrum of the use of health information, including protected health information, or PHI, regulated by HIPAA, both to improve existing AI tools and to develop new ones. And if we think about the spectrum as measuring the value or importance of the PHI, the individuals individuals identifiers themselves, it may be easier to understand that the far ends of the spectrum and easier to understand the risks at each end. Regulators in the industry have generally categorized use of PHI in AI into two buckets, low risk and high risk. But the middle is more difficult and where there can be greater risk because it's where we find the use or value of PHI in the AI model to be potentially debatable. So on the one hand of the spectrum, for example, the lower risk end, there are AI tools such as natural language processors, where individually identifiable health information is not centric to the AI model. But instead, for this example, it's the handwritten notes of the healthcare professional that the AI model learns from. 
And with more data and more notes, the better the tool's recognition of the letters themselves, not the words the letters form, such as a patient's name, diagnosis, or lab results, and the better the tool operates. Then on the other hand of the spectrum, the higher risk end, there are AI tools such as patient-facing next best action tools that are based on an individual patient's medical history, their reported symptoms, their providers, their prescribed medications, potentially their physiological measurements, or similar information, and they offer real-time customized treatment plans with provider oversight. Provider-facing clinical decision support tools similarly support the diagnosis and treatment of individual patients based on an individual's information. And then in the middle of the spectrum, we have tools like hospital logistics planners. So think of tools that think about when the patient was scheduled for an x-ray, when they were transported to the x-ray department, how long did they wait before they got the x-ray, and how long after they received the x-ray were they provided with the results. These tools support population-based activities that relate to improving health or reducing costs, as well as case management and care coordination, which begs the question, do we really need to know that patient's identity for the tool to be useful? Maybe yes, if we also want to know the patient's sex, their date of birth, their diagnosis, date of admission. Otherwise, we may want to consider whether this tool can work and be effective without that individually identifiable information. What's more is that there's no federal law that applies to the use of regulated health data in AI. HIPAA was first enacted in 1996 to encourage healthcare providers and insurers to move away from paper medical and billing records and to get online. And though HIPAA has been updated over the years, the law still remains outdated in that it does not contemplate the use of data to develop or improve AI. So we're faced with applying an old statute to new technology and data use. Again, operating in a gray area that's not uncommon in digital health or for our clients. And to that end, there are several strategies that our HIPAA-regulated clients are thinking of when they're thinking of permissible ways to use PHI in the context of AI. So treatment, payment, healthcare operations activities for covered entities, proper management and administration for business associates, certain research activities and individual authorizations, or de-identified information are all strategies that our clients are currently thinking through in terms of permissible uses of PHI in AI. Wendell: So even though HIPAA hasn't been updated to apply directly to AI, that doesn't mean that HHS has ignored it. So AI, as we all know, has been used in healthcare for many years. And in fact, HHS has actually issued some guidance previously. The White House's Executive Order 14110, back in the fall of 2023, which was called Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, jump-started additional HHS efforts. So I'm going to talk about seven items in that executive order that apply directly to the health industry, and then we'll talk about what HHS has done since this executive order. So first, the executive order requires the promotion of additional investment in AI, just to help prioritize AI projects, including safety, privacy, and security. 
The executive order also requires that HHS create an AI task force that is supposed to meet and create a strategic plan that covers several topics related to AI, including AI-enabled technology, long-term safety and real-world performance monitoring, equity principles, safety, privacy, and security, documentation, state and local rules, and then promotion of workplace efficiency and satisfaction. Third, HHS is required to establish an AI safety program that is supposed to identify and track clinical errors produced by AI and store that in a centralized database for use. And then based on what that database contains, they're supposed to propose recommendations for preventing errors and then avoiding harms from AI. Fourth, the executive order requires that all federal agencies, including HHS, focus on increasing compliance with existing federal law on non-discrimination. Along with that comes education and greater enforcement efforts. Fifth, HHS is required to evaluate the current quality of AI services, and that means developing policies and procedures and infrastructure for overseeing AI quality, including with respect to medical devices. Sixth, HHS is required to develop a strategy for regulating the use of AI in the drug development process. Of course, FDA has already been regulating this space for a while. And then seventh, the executive order actually calls on Congress to pass a federal privacy law. But even without that, HHS's AI task force is including privacy and security as part of its strategic plan. So given those seven requirements really for HHS to cover, what have they done since the fall of 2023? Well, as of the end of July 2024, HHS has created a funding opportunity for applicants to receive money if they develop innovative ways to evaluate and improve the quality of healthcare data used by AI. HHS has also created the AI task force. And many of our clients are asking us, you know, about AI governance. What can they do to mitigate risk from AI? And the HHS task force has issued a plan for state, local, tribal, and territorial governments related to privacy, safety, security, bias, and fraud. And even though that applies to the public sector, our private sector clients should take a look at that so that they know what HHS is thinking in terms of AI governance. Along with this publication, NIST also produces several excellent resources that companies can use to help them with their AI governance journey. Also important is that HHS has recently restructured internally to try to consolidate HHS's ability to regulate technology and areas connected to technology and place that under ONC. And ONC, interestingly enough, has posted job postings for a chief AI officer, a chief technology officer, and a chief data officer. So we would expect that once those roles are filled, they will be highly influential in how HHS looks at AI, both internally and then also externally, and how it will impact the strategic thinking and p
AI-driven autonomous ships raise legal questions, and shipowners need to understand autonomous systems’ limitations and potential risks. Reed Smith partners Susan Riitala and Thor Maalouf discuss new kinds of liability for owners of autonomous ships, questions that may occur during transfer of assets, and new opportunities for investors. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Susan: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. And today we will focus on AI in shipping. My name is Susan Riitala. I'm a partner in the asset finance team of the transportation group here in the London office of Reed Smith.  Thor: Hello, I'm Thor Maalouf. I'm also a partner in the transportation group at Reed Smith, focusing on disputes.  Susan: So when we think about how AI might be relevant to shipping, One immediate thing that springs to mind is the development of marine autonomous vessels. So, Thor, please can you explain to everyone exactly what autonomous vessels are?  Thor: Sure. So, according to the International Maritime Organization, the IMO, a maritime autonomous surface ship or MASS is defined as a ship which, to a varying degree, can operate independently of human interaction. Now, that can include using technology to carry out various ship-related functions like navigation, propulsion, steering, and control of machinery, which can include using AI. In terms of real-world developments, at this year's meeting of the IMO's working group on autonomous vessels, which happened last month in June, scientists from the Korean Research Institute outlined their work on the development and testing of intelligent navigation systems for autonomous vessels using AI. That system was called NEEMO. It's undergone simulated and virtual testing, as well as inland water model tests, and it's now being installed on a ship with a view to being tested at sea this summer. Participants in that conference also saw simulated demonstrations from other Korean companies like the familiar Samsung Heavy Industries and Hyundai of systems that they're trialing for autonomous ships, which include autonomous navigation systems using a combination of AI, satellite technology and cameras. And crewless coastal cargo ships are already operating in Norway, and a crewless passenger ferry is already being used in Japan. Now, fundamentally, autonomous devices learn from their surroundings, and they complete tasks without continuous human input. So, this can include simplifying automated tasks on a vessel, or a vessel that can conduct its entire voyage without any human interaction. Now, the IMO has worked on categorizing a spectrum of autonomy using different degrees and levels of automation. So the lowest level still involves some human navigation and operation, and the highest level does not. So for example, the IMO has a degree Degree 1 of autonomy, a ship with just some automated processes and decision support, where there are seafarers on board to operate and control shipboard systems and functions. 
But there are some operations which can be automated at times and be unsupervised. Now, as that moves up through the degrees, we get to, for example, Degree 3, where you have a remotely controlled ship without seafarers on board the ship. The ship will be controlled and operated from a remote location. All the way up to Degree 4, the highest level of automation, where you have a fully autonomous ship, where the operating systems of the ship are able to make their own decisions and determine their own actions without human interaction.  Susan: Okay, so it seems like from what you said, there are potentially a number of legal challenges that could arise from the increased use of autonomy in shipping. So for example, how might the concept of seaworthiness apply to autonomous vessels, especially ones where you have no crew on board?  Thor: Yeah, that's an interesting question. So the requirement for seaworthiness is generally met when a vessel's properly constructed, prepared, manned and equipped for the voyage that's intended. Now, in the case of autonomous vessels, they're not going to be manned in the traditional way, so the query turns to how a shipowner can actually warrant that a vessel is properly manned for the intended voyage where some systems are automated. What standard of autonomous or AI-assisted watchkeeping setup could be sufficient to qualify as having exercised due diligence? A consideration is of course whether responsibility for seaworthiness could actually be shifted from the shipowner to the manufacturer of the automated functions or the programmer of the software of the automated functions on board the vessel. As you're aware, the concept of seaworthiness is one of many warranties that's regularly incorporated in contracts for the use of ships and for carriage of cargo. And a ship owner can be liable for the damage that results if there's an incident before which the ship owner has failed to exercise due diligence to make the ship seaworthy. And this, in English law, is judged by the standard of what level of diligence would be reasonable for a reasonably prudent ship owner. That's true even if there has been a subsequent nautical fault on board. But how much oversight and knowledge of the workings of an autonomous or AI-driven system could a prudent ship owner actually have? I mean, are they expected to be a software or AI expert? Under the existing English law on unseaworthiness, a shipowner or a carrier might not be responsible for faults made by an independent contractor before the ship came into their possession or before it came into their orbit. So potentially faults made during the shipbuilding process. So to what extent could any faults in an AI or autonomous system be treated in that way? Perhaps a ship owner or carrier could claim a defect in an autonomous system came about before the vessel came into their orbit and therefore they're potentially not responsible for subsequent unseaworthiness or incidents that result. There's also typically an exception to a ship owner's liability for navigational faults on board the vessel if that vessel has passed a seaworthiness test. But if certain crew and management functions have been replaced by autonomous AI systems on board, how could we assess whether or not there has actually been a navigational fault for which the owners might escape liability, or a pre-existing issue of unseaworthiness, such as a pre-existing hardware or software glitch? 
This opens up a whole new line of inquiry as to what might have happened behind the software code or the protocols of the autonomous system on board. The legal issues of responsibility of the shipowner, and the applicable liability for any incidents which might have been caused by unseaworthiness, are going to involve a significant legal inquiry, in new areas, when it comes to autonomous vessels.

Susan: Sounds very interesting. And I guess that makes me think of, I guess, a wider issue that crewing is only part of, which would be standards and regulations relating to autonomous vessels. And obviously, as a finance lawyer, that would be something my clients will be particularly interested in, in terms of what standards are there in place so far for autonomous vessels and what regulation can we expect in the future?

Thor: Sure. Well, the answer is that at the moment, there's not very much. So as I've mentioned already, the IMO has established a working group on autonomous vessels. And the aim of that IMO working group is to adopt a non-mandatory, goal-based code for autonomous vessels, the MASS Code, which aims to be in place by 2025. But like I said, that will be non-mandatory, and that will then form the basis for what's intended to be a mandatory MASS Code, which is expected to come into force on the 1st of January 2028. Now, the MASS Code working group last met in May of this year, and it reported on a number of recommendations for inclusion in the initial voluntary MASS Code. Interestingly, one of those recommendations was for all autonomous vessels, so even the fully autonomous Degree 4 vessels, to have a human being, a person in charge, designated as the master at all times, even if that person is remote. So that may rule out a fully autonomous, non-supervised vessel from being compliant with the code. So mandatory standards are still very much in development and not due to come into force until 2028. At the moment, that doesn't mean to say there won't be national regulations or flag regulations covering those vessels before then.

Susan: Right. And then I guess another area there would be insurance. I mean, what happens if something happens to a vessel? I mean, I'm looking at it from a financial perspective, of course, but obviously for shipowners as well, insurance will be the key source of recovery. So what kinds of insurance products would already be available for autonomous vessels?

Thor: Well, it's good to know that some of the insurers are already offering products covering autonomous vessels. So just having Googled what's available the other day, I bumped into the Ship Owners Club, which holds entries for between 50 and 80 autonomous vessels under their All Risks P&I cover. And it seems that Guard is also providing hull and machinery and P&I cover for autonomous vessels. And I can see that the industry is definitely taking steps to get to grips with cover for autonomous vessels. So hull and P&I cover is definitely out there. So we've cove
In this episode, we explore the intersection of artificial intelligence and German labor law. Labor and employment lawyers Judith Becker and Elisa Saier discuss key German employment laws that must be kept in mind when using AI in the workplace; employer liability for AI-driven decisions and actions; the potential elimination of jobs in certain professions by AI and the role of German courts; and best practices for ensuring fairness and transparency when AI has been used in hiring, termination and other significant personnel actions. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Judith: Hello, everyone. Welcome to Tech Law Talks and to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the workplace in Germany. We would like to walk you through the employment law landscape in Germany and would also like to give you a brief outlook on what's yet to come, looking at the recently adopted EU regulation on artificial intelligence, the so-called European Union AI Act. My name is Judith Becker. I'm a counsel in the Labor and Employment Group at Reed Smith. I'm based at the Reed Smith office in Munich, and I'm here with my colleague Elisa Saier. Elisa is an associate in the Labor and Employment Law Group, and she's also based in the Reed Smith office in Munich. So, Elisa, we are both working closely with the legal and HR departments of our clients. Where do you already come across AI in employment in Germany, and what kind of use can you imagine in the future?

Elisa: Thank you, Judith. I am happy to provide a brief overview of where AI is already being used in working life and in employment law practice. The use of AI in employment law practice is not only increasing worldwide, but certainly also in Germany. For example, workforce planning and recruiting can be supported by AI. A pretty large number of AI tools already exists for recruiting, for example, in the job description and advertisement, the actual search and screening of applicants, the interview process, the selection and hiring of the right match, and finally the onboarding process. AI-powered recruiting platforms can make the process of finding and hiring talent more efficient, objective, and data-driven. These platforms use advanced algorithms to quickly scan CVs and applications and automatically pre-select applicants based on criteria such as experience, skills, and educational background. This does not only save time, but also improves the accuracy of the match between candidates and vacancies. In the area of employee evaluation, artificial intelligence offers the opportunity to continually analyze performance data and evaluate it. This enables managers to make well-founded decisions about promotions, salary adjustments, and further training requirements. AI is also used in the field of employee compensation. By analyzing large amounts of data, AI can identify current market trends and industry-specific salary benchmarks.
This enables companies to adjust their salaries to the market faster and more accurately than with traditional methods. When terminating employment relationships, AI can be used to support the social selection process, the calculation of severance payments, and the drafting of warnings and termination letters. Finally, AI can support compliance processes, for example, in the investigation of whistleblowing reports received via ethics hotlines. Overall, it is fair to say that AI has arrived in practice in the German workplace. This certainly raises questions about the legal framework for the use of AI in the employment context. Judith, could you perhaps explain which legal requirements employers need to consider if they want to use AI in the context of employment?

Judith: Yes, thank you, Elisa. Sure. The German legislature has so far hardly provided any AI-specific regulations in the context of employment. AI has only been mentioned in a few isolated instances in German employment laws. However, this does not mean that employers in Germany are in a legal vacuum when they use AI. There are, of course, general, non-AI-specific employment laws and employment law principles that apply in the context of using AI in the workplace. In the next few minutes, we would like to give you an overview of the most relevant of these employment laws that German-based employers should have in mind when they use AI. Now, I would like to start with the General Equal Treatment Act, the so-called AGG. Employers in Germany should definitely have that act in mind, as it applies and can also be violated even if AI is interposed for certain actions. According to this act, discrimination against job applicants and employees during their employment on the grounds of race or ethnic origin, gender, religion or belief, disability, age or sexual orientation is, generally speaking, prohibited. Although AI is typically regarded as being objective, AI can also have biases, and as a result the use of AI can also lead to discriminatory decisions. This may occur when, for example, the training data the AI is trained with is itself based on human biases, and also if the AI is programmed in a way that is discriminatory. Currently, for example, as Elisa explained in the beginning, AI is very often used to optimize application proceedings, and when a biased AI is used here, for example, for selecting or for rejecting applicants, this can lead to violations of the General Equal Treatment Act. And since AI is not a legal subject itself, this discrimination would be attributable to the employer that is using the AI. The result would then be, in the event of a breach of the Act, that the employer is exposed to claims for damages and compensation payments. And in this context, it is important to know that under the German Equal Treatment Act, the employee only has to demonstrate that there are indications that suggest discrimination. So if the employee is able to do so, then the burden of proof shifts to the employer, and the employer must then prove that there was in fact no such discrimination. And when an employer uses AI, due to the technical complexity that is involved, that can be quite challenging. In this regard, we think that human control of the AI system is key and should be maintained. As we heard from Elisa in the beginning, AI is not only used in the hiring process, but also in the course of the employment.
One question that came up here is whether AI can function as a superior itself and whether AI can give work instructions to employees. The initial answer here is yes. German law does not stipulate that work instructions have to be given by a human being. Therefore, just as it is possible to delegate the right to give instructions to a manager or to another superior, it is also possible to enable an AI system to give instructions to the employees. In this context, it is important to recall, however, that the instructions are, of course, again attributable to the employer. And if the AI instructs in a way that is, for example, outside of reasonable discretion, or gives instructions which are outside of the employee's contract, then this instruction would, of course, be unlawful, and that would be attributable to the employer as well. One aspect that I would like to point out here is that if an AI system would lead to a decision towards the employee that has legal effects and impacts the employee in a very significant way, then such decisions may not be made exclusively by an AI. This is because of a principle that is to be found in the data protection laws, and Elisa will explain this in greater detail. Another aspect of AI in the course of employment is whether employers can instruct their employees to use AI. Again, here the answer is yes. This is part of the employer's right to give instructions, and this right covers not only whether employees should use AI at all or whether they are prohibited from using it; it also covers what kind of AI can be used. To avoid any misunderstandings and to provide for clarity here, we advise that employers should have a clear AI policy in place so that the employees know what the expectations are, and what they are allowed to do and what they are not allowed to do. And in this context, we think it is also very important to address confidentiality issues and also IP aspects, in particular if publicly accessible AI is used, such as ChatGPT.

Elisa: Yes, that's true, Judith. I agree with everything you said. In connection with the employer's right to issue instructions, the question also arises as to the extent to which employees may use AI to perform their work. The principle here is that if the employer provides its employees with a specific AI application, they are allowed to use it accordingly. Otherwise, however, things can get more complicated. This is because under German law, employees are generally required to carry out their work personally. This means that they are generally not allowed to have other persons do their work in their place. The key factor is likely to be whether the AI application is used to support the employee in performing a task or whether the AI application performs the task alone. The scope of the use of AI is certainly relevant here as well. If employees limit themselves to giving instructions to the AI application for a work task and simply copy the result, this can be an indication of a breach of the personal work perform
Reed Smith partners Howard Womersley Smith and Bryan Tan with AI Verify community manager Harish Pillay discuss why transparency and explainability in AI solutions are essential, especially for clients who will not accept a “black box” explanation. Subscribers to AI models claiming to be “open source” may be disappointed to learn the model had proprietary material mixed in, which might cause issues. The session describes a growing effort to learn how to track and understand the inputs used in training AI systems. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. My name is Bryan Tan and I'm a partner at Reed Smith Singapore. Today we will focus on AI and open source software.

Howard: My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies team of Reed Smith in London and New York. And I'm very pleased to be in this podcast today with Bryan and Harish.

Bryan: Great. And so today we have with us Mr. Harish Pillay. And before we start, I'm going to just ask Harish to tell us a little bit, well, not really a little bit, because he's done a lot, about himself and how he got here.

Harish: Well, thanks, Bryan. Thanks, Howard. My name is Harish Pillay. I'm based here in Singapore, and I've been in the tech space for over 30 years. I did a lot of things primarily in the open source world, both open source software as well as hardware design, and so on. So I've covered the spectrum. Way back in graduate school, I did things in AI and chip design. That was in the late 1980s, and there was not much from an AI point of view that I could do then. It was the second winter for AI. But in the last few years, there was a resurgence in AI, and the technologies and the opportunities that can happen with the newer ways of doing things with AI make a lot more sense. So now I'm part of an organization here in Singapore known as the AI Verify Foundation. It is a non-profit open-source software foundation that was set up about a year ago to provide tools, software testing tools, to test AI solutions that people may be creating, to understand whether those tools are fair, unbiased and transparent. There are about 11 criteria it tests against. It covers both traditional AI types of solutions as well as generative AI solutions. So these are the two open source projects that are globally available for anyone to participate in. So that's currently what I'm doing.

Bryan: Wow, that's really fascinating. Would you say, Harish, that your experience over the, I guess, three decades with the open source movement, with the whole Linux user groups, has kind of culminated in this place where now there's an opportunity to shape the development of AI in an open-source context?

Harish: I think we need to put some parameters around it as well. The AI that we talk about today could never have happened if it were not for open-source tools. That is plain and simple.
So things like TensorFlow and all the tooling that goes around it, in trying to do the model building and so on and so forth, could not have happened without open source tools and libraries, Python libraries and a whole slew of other tools. If these were all dependent on non-open-source solutions, we would still be talking about something happening one fine day. So it's a given that that's the baseline. Now, what we need to do is to get this to the next level of understanding as to what it means when you say it's open source and artificial intelligence, or open source AI, for that matter. Because now we have a different problem that we are trying to grapple with. The problem we're trying to grapple with is the definition of what is open-source AI. We understand open source from a software point of view, from a hardware point of view. We understand that I have access to the code, I have access to the chip designs, and so on and so forth. No questions there. It's very clear to understand. But when you talk about generative AI as a specific instance of open-source AI, I can have access to the models. I can have access to the weights. I can do those kinds of things. But what was it that made those models become the models? Where were the data from? What's the data? What's the provenance of the data? Are these data openly available, or are they hidden away somewhere? Understandably, we have a huge problem, because in order to train the kind of models we're training today, it takes a significant amount of data and computing power to train the models. The average software developer does not have the resources to do that, like what we could do with a Linux environment or Apache or Firefox or anything like that. So there is this problem. So the question still comes back to: what is open source AI? So the Open Source Initiative, OSI, is now in the process of formulating what it means to have open source AI. The challenge we find today is that because of the success of open source in every sector of the industry, you find a lot of organizations now bandying around the label, "our stuff is open source, our stuff is open source," when it is not. And they are conveniently using it as a means to gain attention and so on. No one is going to come and say, hey, do you have a proprietary tool? That ship has sailed. It's not going to happen anymore. But the moment you say, oh, we have an open source fancy tool, oh, everybody wants to come and talk to you. But the way they craft that open source message is actually, quite sadly, disingenuous, because they are putting restrictions on what you can actually do. It is completely contrary to what open-source licensing means under the Open Source Initiative. I'll pause there for a while because I threw a lot of stuff at you.

Bryan: No, no, no. That's a lot to unpack here, right? And there's a term I learned last week, and it's called AI washing. And that's where people try to bandy the terms about, throw them together, and it ends up representing something it's not. But that's fascinating. I think you talked a little bit about being able to see what's behind the AI. And I think that's kind of part of those 11 criteria that you talked about. I think auditability, transparency would be kind of one of those things. I think we're beginning to go into some of the challenges, kind of pitfalls, that we need to look out for. But I'm going to just put a pause on that, and I'm going to ask Howard to jump in with some questions of his own.
I think he's got some interesting questions for you also.

Howard: Yeah, thank you, Bryan. So, Harish, you spoke about the Open Source Initiative, which we're very familiar with, and particularly the kind of guardrails that they're putting around how open source should be applied to AI systems. You've got a separate foundation. What's your view on where open source should feature in AI systems?

Harish: It's exactly the same as what OSI says. We are making no distinction, because the moment you make a distinction, then you bifurcate or you completely fragment the entire industry. You need to have a single perspective, and a perspective that everybody buys into. It is a hard sell currently, because not everybody agrees to the various components inside there, but there is good reasoning for some of the challenges. At the same time, if that conversation doesn't happen, we have a problem. But from the AI Verify Foundation perspective, it is our code that we make. Our code, interestingly, is not an AI tool. It is a testing tool. It is written purely to test AI solutions, and it's on an Apache license. From a licensing perspective, this is a no-brainer. It's not an AI solution in and of itself. It just takes an input, runs it through the test, and spits out an output, and Mr. Developer, take that and do what you want with it.

Howard: Yeah, thank you for that. And what about your view on open source training data? I mean, that is really a bone of contention.

Harish: That is really where the problem comes in, because I think we do have some open source training data, like the Common Crawl data and a whole slew of different components there. So as long as you stick to those that have been publicly available and you then train your models based on that, or you take models that were trained based on that, I think we don't have any contention or any issue at the end of the day. You do whatever you want with it. The challenge happens when you mix the training data, whether it was originally Common Crawl or any of the, you know, creative license content, and you mix it with unlicensed content, or content licensed under proprietary terms with no permission, then we have a problem. And this is actually an issue that we have to collectively come to an agreement on, as to how to handle it. Now, should it be done on a two-tier basis? Should it be done with different nuances behind it? This is still a discussion that is ongoing, constantly ongoing. And OSI is taking the mother lode of the weight to make this happen. And it's not an easy conversation to have, because there are many perspectives.

Bryan: Yeah, thank you for that. So, Harish, just coming back to some of the other challenges that we see, what kind of challenges do you foresee for the continued development of open source with AI in the near future? You've already said we've encountered some of them; some of the problems are really kind of, in a sense, man-ma
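To make the "take an input, run the test, spit out an output" idea above concrete, here is a minimal, purely illustrative sketch in Python of one kind of check a fairness-testing toolkit might run: a demographic parity gap between groups of subjects. This is not the AI Verify toolkit or its API; the function name, toy data and the 0.10 threshold are hypothetical assumptions for demonstration only.

# Illustrative sketch only: a minimal fairness check of the kind an AI testing
# toolkit might run. It is NOT the AI Verify toolkit or its API; the function
# name, toy data and the 0.10 threshold are hypothetical assumptions.
from typing import Sequence

def demographic_parity_gap(predictions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)  # share of favourable (1) outcomes
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Toy input: model decisions (1 = favourable) and the group each subject belongs to.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    print("PASS" if gap <= 0.10 else "FLAG FOR REVIEW")  # threshold chosen arbitrarily here

In a real toolkit, a metric like this would be only one of many checks (fairness, robustness, transparency and so on) run over a model's outputs before the results are packaged into a report for the developer.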