AI explained: AI and the impact on medical devices in the EU
Description
Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act, particularly its implications for medical device manufacturers. They examine the new opportunities being created by AI, but they also warn that medical-device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Cynthia: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI and life sciences, particularly medical devices. I'm Cynthia O’Donoghue. I'm a Reed Smith partner in the London office in our emerging technology team. And I'm here today with Wim Vandenberghe. Wim, do you want to introduce yourself?
Wim: Sure, Cynthia. I'm Wim Vandenberghe. I'm a life sciences partner out of the Brussels office, and my practice is really about regulatory and commercial contracting in the life sciences space.
Cynthia: Thanks, Wim. As I mentioned, we're here to talk about the EU AI Act, which came into force on the 1st of August 2024 and has various phases for when its different aspects take effect. But I think a key thing for the life sciences industry, and for any developer or deployer of AI, is that research and development activity is exempt from the EU AI Act. And the reason for that is that the EU wanted to foster research, innovation and development. But while the headline sounds great, if, as a result of research and development, that AI product is going to be placed on the EU market and deployed, essentially sold or used in products in the EU, it does become regulated under the EU AI Act. And there seems to be a lot of talk about the interplay between the EU AI Act and various other EU laws. So, Wim, how does the AI Act interplay with the medical device regulations, the MDR and the IVDR?
Wim: That's a good point, Cynthia. And that's, of course, where a lot of medical device companies are looking at the interplay and potential overlap: the AI Act, on the one hand, is a cross-sectoral piece of legislation, so it applies to all sorts of products and services, whereas the MDR and the IVDR are, of course, only applicable to medical technologies. So, in summary, both the AI Act and the MDR and IVDR will apply to AI systems, provided, of course, that those AI systems are in scope of the respective legislation. Maybe I'll start with the MDR and IVDR and then turn to the AI Act. Under the MDR and the IVDR, there are many AI solutions that are either considered to be software as a medical device in their own right, or that are a part or component of a medical technology. So to the extent that such an AI system, as software, meets the definition of a medical device under the MDR or under the IVDR, it would actually qualify as a medical device, and therefore the MDR or IVDR is fully applicable to those AI solutions. Stating the obvious, there are plenty of AI solutions already on the market and being used in healthcare settings as well. What the AI Act focuses on, particularly with regard to medical technology, is the so-called high-risk AI systems. And for a medical technology to be a high-risk AI system under the AI Act, there is essentially a twofold set of criteria that needs to apply. First of all, the AI solution needs to be a medical device or an in vitro diagnostic under the sector legislation, so the MDR or the IVDR, or it is a safety component of such a medical product. Safety component is not really explained in the AI Act, but think, for example, of the failure of an AI system to interpret diagnostic IVD instrument data, which could endanger the health of a person by generating false positives. That would be a safety component. So the first step is to see whether the AI solution qualifies as a medical device or as a safety component of a medical device. And the second step is that the AI Act only captures AI solutions that actually undergo a conformity assessment by a notified body under the MDR or the IVDR. So, to make a long story short, it means that medical devices in class IIa, IIb, or III will be in the scope of the AI Act, and for the IVDR, for in vitro diagnostics, it would be risk classes B through D that are captured by the AI Act. So that essentially determines the scope and applicability of the AI Act. And Cynthia, maybe coming back to your earlier point on research, the other curious thing that the AI Act doesn't really foresee is the fact that, of course, to get an approved medical device, you need to do certain clinical investigations and studies on that medical device. You really have to test it in a real-world setting, and that happens by way of a clinical trial or clinical investigation. The MDR and the IVDR have elaborate rules about that. And the very fact that you do this prior to getting your CE mark and your approval, and then launching the device on the market, is very standard under the MDR and the IVDR.
However, under the AI Act, which also requires CE marking and approval, and we'll come to that a little bit later, there's no mention of such clinical and performance evaluation of medical technology. So if you were to read the AI Act just like that, it would actually mean that you need to have a CE mark for such a high-risk AI system, and only then can you do your clinical assessment. And of course, that wouldn't be consistent with the MDR and the IVDR. We can talk a little bit later about consistency between the two frameworks as well. The one thing that I do see as being very new under the AI Act is everything to do with data and data governance. And I would just ask, Cynthia, given your experience, whether you can maybe talk a little bit about what the requirements for data and data governance under the AI Act are going to be.
Cynthia: Thanks, Wim. Well, the AI Act obviously defers to the GDPR, and the GDPR, which regulates how personal data is used within the EEA member states and transferred outside the EEA, has to interoperate with the EU AI Act, in the same way, as you were just saying, that the MDR and the IVDR need to interoperate. And you touched, of course, on clinical trials, so the Clinical Trials Regulation would also have to work and interoperate with the EU AI Act. Obviously, if you're working with medical devices, most of the time it's going to involve personal data, and what is called special category data, namely data concerning health about patients or participants in a clinical trial. A key part of AI is the training data, and so the data that's ingested into the AI system for purposes of a clinical trial or a medical device needs to be as accurate as possible. And obviously the GDPR also includes a data minimization principle, so the data needs to be the minimum necessary. But at the same time, that training data, depending on the situation, might be more controlled in a clinical trial; once a product is put onto the market, there could be data ingested into the AI system that has anomalies in it. You mentioned false positives, but there's also a requirement under the AI Act to ensure that the EU's ethical principles for AI, which are non-binding, are adhered to, and one of those is human oversight. So if there are anomalies in the data and the outputs from the AI would give false positives or create other issues, the EU AI Act requires, once a CE mark is obtained, just like the MDR does, an ongoing conformity assessment, so that any anomalies, and the necessity for human intervention, are addressed on a regular basis as part of reviewing the AI system itself. So we've talked about high-risk AI. We've talked a little bit about the overlap between the GDPR and the EU AI Act, and about the overlap and interplay with the MDR and the IVDR. Let's talk about some real-world examples. For instance, the EU AI Act also classes education as potentially high risk if any kind of vocational training is based solely on assessment by an AI system. How does that potentially work with the way medical device organizations and pharma companies might train clinicians?
Wim: It's a good question. Normally, those kinds of programs would typically not be captured by the definition of a medical device under the MDR, so they'd most likely be out of scope, unless the program actually extends to real-life diagnosis, cure, or treatment, helping the physician to make their own decision. But if it's really about training, it would normally fall out of scope. And that would be very different here with the AI Act; it would actually be captured, it would be qualified as high risk.