Key enforcement issues of the AI Act should lead EU trilogue debate

June 16, 2023

By Alex Engler

On June 14th, the European Parliament passed its version of the Artificial Intelligence (AI) Act, setting the stage for a final debate on the bill between the European Commission, Council, and Parliament—called the “trilogue.” The trilogue will follow an expedited timeline: the European Commission is pushing to finish the AI Act by the end of 2023, so that it can be voted through before the 2024 European Parliament elections can affect its passage. The trilogue will certainly take up many contentious issues, including the definition of AI, the list of high-risk AI categories, and whether to ban remote biometric identification. Relatively underdiscussed, however, are the details of implementation and enforcement of the EU AI Act, which differ meaningfully across the Council, Commission, and Parliament proposals.



The Parliament proposal would centralize AI oversight in one agency per member state, while expanding the role of a coordinating AI Office, a key change from the Commission and Council. All three proposals look to engender an AI auditing ecosystem—but none have sufficiently committed to this mechanism to make it a certain success. Further, the undetermined role of civil liability looms on the horizon. These issues warrant both focus and debate, because no matter what specific AI systems are regulated or banned, the success of the EU AI Act will depend on a well-conceived enforcement structure.


One national surveillance authority, or many?


The Parliament’s AI Act contains a significant shift in the approach to market surveillance, that is, the process by which the European Union (EU) and its member states would monitor and enforce the law. Specifically, the Parliament version requires one national surveillance authority (NSA) in each member state. This is a departure from the Council and Commission versions of the AI Act, which would allow member states to create as many market surveillance authorities (MSAs) as they prefer.


In all three AI Act proposals, there are several areas where existing agencies would be anointed as MSAs—this includes AI in financial services, AI in consumer products, and AI in law enforcement. The Council and Commission proposals would let this approach expand further: a member state could, for example, make its existing agency in charge of hiring and workplace issues the MSA for high-risk AI in those areas, or name its education ministry the MSA for AI in education. The Parliament proposal does not allow for this—aside from a few selected MSAs (e.g., finance and law enforcement), member states must create a single NSA for enforcing the AI Act. In the Parliament version, the NSA even gets some authority over consumer product regulators and can override those regulators on issues specific to the AI Act.


Between these two approaches, there are a few important trade-offs to consider. A single NSA, as Parliament proposes, is more likely to be able to hire talent, build internal expertise, and effectively enforce the AI Act than a wide range of distributed MSAs. Further, centralizing oversight in one NSA per member state makes coordination between member states easier—there is generally just one agency per member state to work with, and each NSA would have a voting seat on the board that manages the AI Office, a proposed advisory and coordination body. This is clearly easier than creating a range of coordination councils between many sector-specific MSAs.


However, this centralization comes at a cost: the NSA will be separated from existing regulators in member states. This leads to the unenviable position that algorithms used for hiring, workplace management, and education will be governed by different authorities than human actions in those very same areas. The interpretation and implementation of the AI Act are also likely to suffer in some areas, since AI experts and subject-matter experts will sit in separate agencies. Early examples of application-specific AI regulations demonstrate how complex they can be (see, for instance, the proposed U.S. rule on transparency and certification of algorithms in health IT systems, or the Equal Employment Opportunity Commission’s guidance on AI hiring under the Americans with Disabilities Act).


This is a difficult decision with unavoidable trade-offs, but because the approach to government oversight affects every other aspect of the AI Act, it should be prioritized, not postponed, in trilogue discussions.


Will the AI Act engender an AI evaluation ecosystem?


Government market surveillance is only the first of two or three (the Parliament version adds individual redress) mechanisms for enforcing the AI Act. The second mechanism is a set of processes to approve organizations that would review and certify high-risk AI systems. These organizations are called ‘notified bodies’ when they receive a notification of approval from a government agency selected for this task, which itself is called a ‘notifying authority.’ This terminology can be quite confusing, but the general idea is that EU member states will approve organizations, including non-profits and companies, to act as independent reviewers of high-risk AI systems, giving them the power to approve those systems as meeting AI Act requirements.


It is the aspiration of the AI Act that this will foster a European ecosystem of independent AI assessment, resulting in more transparent, effective, fair, and risk-managed high-risk AI applications. Certain organizations already exist in this space, such as the algorithmic auditing company Eticas AI, AI services and compliance provider AppliedAI, the digital legal consultancy AWO, and the non-profit Algorithmic Audit. This is a goal that other governments, such as the UK and U.S., have encouraged through voluntary policies.


However, it is not clear that the current AI Act proposals will significantly support such an ecosystem. For most types of high-risk AI, independent review is not the only path for providers to sell or deploy their systems. Instead, providers can develop AI systems to meet a forthcoming set of standards (a more detailed description of the rules set forth in the AI Act) and simply self-attest that they have done so, subject to some reporting and registration requirements.


The independent review is intended to be based on required documentation of the high-risk AI system’s technical performance, as well as documentation of its management systems. This means the review can only really begin once this documentation is complete, which is the same point at which an AI developer could instead self-attest to meeting the AI Act requirements. The self-attestation process is therefore sure to be faster and more certain (an independent assessment could come back negative) than paying for an independent review of the AI system.


When will companies choose independent review by a notified body? A few types of biometric AI systems, such as biometric identification (specifically of more than one person, but short of mass public surveillance) and biometric analysis of personality characteristics (not including sensitive characteristics such as gender, race, citizenship, and others, for which biometric AI is banned), are specifically encouraged to undergo independent review by a notified body. However, even this is not required. Similarly, the new rules proposed by Parliament for foundation models require extensive testing, for which a company may, but need not, employ independent evaluators. Independent review by notified bodies is never strictly required.


Even without requirements, some companies may still choose to contract with notified bodies for independent evaluations. A notified body might offer this as one part of a package of compliance, monitoring, and oversight services for AI systems—a business model already visible in some existing AI assurance companies. This may be especially likely for larger companies, where regulatory compliance is as important as getting new products to market (which is not often the case for small businesses). Adding another wrinkle, the Commission can later change the requirements for a category of high-risk AI. For example, if the Commission finds that self-attestation has been insufficient to hold the market for AI workplace management software to account, it can require this set of AI systems to undergo independent assessment by a notified body. This is a potentially powerful mechanism for holding an industry to account, although it is unclear under what circumstances this authority would be used.


By and large, independent assessment of high-risk AI systems by notified bodies remains optional under all three proposals, leaving it uncertain whether the AI Act will actually foster the evaluation ecosystem it aspires to create.
