AI explained: AI and cybersecurity threats
Description
Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and best practices for designing, managing and responding to AI-related cyber risks. Partner Christian Leuthner in Frankfurt and partner Cynthia O'Donoghue in London, together with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leuthner. I'm a partner in Reed Smith's Frankfurt office, and I'm joined by my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office.
Cynthia: Morning, Christian. Thanks.
Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you.
Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models, or LLMs, is significantly lowering the barriers to entry for cyber attacks. The technology, so AI, enhances the scope, speed, and impact of cyber attacks and other malicious activities, because it simplifies social engineering, and it makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had such attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat?
Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. And so part of the principle behind the EU AI Act is security by design. The idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity, and that prevents such things as data poisoning and model poisoning. And it also ties in with the horizontal laws across the EU. Because the EU AI Act treats AI as a product, it brings into play other EU legislation, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, most of our clients are concerned about the use of AI systems and ensuring that they're secure. But really, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks. So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance?
Christian: Yeah, I think, and you mentioned it already, the legislator saw that there is a link, and high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires certain stakeholders in software and hardware products to also implement security measures, and imposes a range of further obligations on them. To avoid over-engineering these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts; they can just rely on what they have implemented. But it would be great if we were not only applying the law, but if there were also some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements?
Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and last year it produced a paper called the Multilayer Framework for Good Cybersecurity Practices for AI. So it may still need to be updated. However, it does provide a very good summary of various AI initiatives throughout the world. And it generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities and the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, various threats that you already talked about, such as data poisoning and model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards, ENISA mentions ISO/IEC 42001, which is an AI management system standard. Another noteworthy set of guidelines it mentions is the NIST AI Risk Management Framework, obviously a US framework. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI: valid, reliable, safe, secure, and resilient.
Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there may be occasions when you are subject to an attack, when there has been a successful attack, or when there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say?
Cynthia: Well, there are numerous notification obligations in relation to attacks, depending on the type of data or the entity involved. For instance, if a breach resulting from an AI attack involves personal data, then there are notification requirements under the GDPR. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, or NIS 2 for short, tiers organizations into essential entities and important entities. And depending on whether the sector the particular victim is in is subject to either the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2 in relation to vulnerabilities and attacks. And ENISA, who Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity in the relevant member state for that particular sector. And I'm sure there are other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? So why don't you tell us a bit more about the notification requirements for financial services organizations?
Asélle: The EU Digital Operational Resilience Act, or DORA, also applies similar requirements to the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall into. Article 30 of DORA requires, for example, specific contractual clauses addressing cybersecurity around data. So it requires provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product, plays a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it means that organizations regulated in this way are likely to ask AI providers to have additional tools, policies, and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding approach to regulating AI. The Labour government, however, appears to want to adopt a binding instrument, although it is likely to be of limited scope, focusing only on the most powerful AI models. That said, there isn't any clarity on whether the use of AI in cyber threats will be regulated in any specific way. Christian, I wanted to direct a question to you. How about the use of AI in supply chains?
Christian: Yeah, I think it's very important to look at the entire supply chain of a company, or its entire set of contractual relationships. Because most of our clients, or companies out there, do not develop or create their own AI. They will use AI from vendors, or their suppliers or vendors will use AI products to be more efficient. And all the requirements, for example, the n