1. Background: three years of legislative debate
Today, on July 12, 2024, Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (the "Regulation" or "AI Act") was finally published in the Official Journal of the EU and will enter into force on August 1, 2024. This milestone is the culmination of three years of legislative debate since the EU Commission's first proposal for a comprehensive EU regulation on AI in April 2021.[1]
The Regulation represents the first comprehensive legislation on AI, designed to promote the development and adoption of a human-centric and trustworthy AI ecosystem while ensuring a high level of protection for health, safety and fundamental rights in the EU.
Compared to the EU Commission’s proposal, the AI Act now features a revised definition of “AI systems”.[2] This is aligned with the recent definition proposed by the OECD,[3] whereby an “AI system” is described as a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Moreover, this revised definition mirrors the GDPR’s approach in that it is technology-neutral and adaptable to future AI developments. It excludes less complex software, as well as programming methods and systems based solely on rules set by natural persons to perform automated tasks.
2. Entry into force: the transitional period and the launch of the AI Pact
The Regulation will generally become fully applicable after a two-year transitional period, although certain obligations will take effect earlier or later. Among others, the ban on prohibited practices shall apply after 6 months and the rules on general-purpose AI (“GPAI”) models shall apply after 12 months. A longer period of 36 months applies to GPAI models that have been placed on the market within 12 months of the AI Act’s entry into force and to high-risk AI systems covered by the EU harmonization legislation listed in Annex I (see Section 5 below).[4]
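For quick reference, the sketch below (Python; an illustrative mapping of our own, not text from the Regulation) lays out the key applicability dates that follow from Articles 111 and 113, and lets you check which obligations already apply on a given date:

```python
from datetime import date

# Key applicability milestones under Articles 111 and 113 of the AI Act
# (entry into force: August 1, 2024). The dates are those set out in the Act;
# the milestone labels are our own shorthand.
MILESTONES = {
    "Ban on prohibited practices": date(2025, 2, 2),      # ~6 months
    "GPAI model rules": date(2025, 8, 2),                 # ~12 months
    "General applicability": date(2026, 8, 2),            # ~24 months
    "High-risk systems under Annex I": date(2027, 8, 2),  # ~36 months
}

def milestones_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [name for name, start in MILESTONES.items() if on >= start]

for name, start in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    print(f"{start.isoformat()}  {name}")
print(milestones_in_force(date(2025, 9, 1)))  # first two milestones apply
```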
To bridge this transitional period, the EU Commission has launched the AI Pact initiative, encouraging early compliance with the new Regulation. This initiative allows organizations to align with the evolving rules through commitment statements and implementation plans.[5]
3. Scope of the Regulation: a broad application of the EU legal framework
The legal framework will apply to both public and private actors – whether within or outside the EU – as long as the AI system or model is placed on the market or put into service in the EU, or the output produced by the AI system is used in the EU.[6] The territorial scope is, therefore, exceptionally broad and could potentially capture many international organizations with only a tangential connection to the EU.
In fact, the Regulation applies to: (i) providers[7] placing on the market or putting into service AI systems, or placing on the market GPAI models, in the EU, irrespective of their place of establishment or location; (ii) importers and distributors of AI systems; (iii) product manufacturers placing on the market or putting into service an AI system together with their product and under their own trademark; (iv) deployers[8] of AI systems located in the EU; and (v) authorized representatives of providers not established in the EU, as well as affected persons located in the EU.[9]
The AI Act also applies to providers and deployers of AI systems located in a third country where the outputs produced by the AI systems are used in the EU.
Excluded from the scope of the Regulation are, among others, AI technologies used exclusively for military, defence or national security purposes (which fall within the competence of Member States), scientific research and development, as well as purely personal non-professional activities.
Moreover, the final text of the Regulation clarifies that AI systems released under free and open-source licenses fall outside the scope of the AI Act unless they: (i) are placed on the market or put into service as high-risk systems; or (ii) fall within the application of Articles 5 and 50 of the AI Act (i.e., when the AI systems are considered prohibited practices or are subject to transparency obligations).[10] GPAI models released under a free and open-source license are likewise exempt from certain GPAI-specific obligations, unless they present systemic risks (see Section 7 below).
4. Prohibited Artificial Intelligence Practices
Certain AI practices, including, among others, social scoring, cognitive behavioral manipulation, and emotion recognition systems in the context of employment or education, are prohibited outright under the Regulation.
5. High-risk AI systems
High-risk AI systems, while permitted, are subject to stringent obligations relating to risk management, data governance, technical documentation, transparency, registration and record-keeping requirements, and human oversight, as well as accuracy, robustness, and cybersecurity.
AI systems are considered high-risk if they pose a “significant risk” to an individual’s health, safety, or fundamental rights, and, in particular, if: (i) they are intended to be used as a product or as a safety component of a product covered by EU harmonization legislation listed in Annex I (e.g., medical devices, industrial machinery, toys, aircraft, and cars) and the product is required to undergo a third-party conformity assessment under the above-mentioned legislation; or (ii) they are used in certain contexts listed in Annex III (e.g., AI systems used for education, employment, critical infrastructure, essential services, law enforcement, border control, and administration of justice).
Building on the Commission’s proposal, the Regulation now provides for a series of exemptions[11] that would allow providers of AI systems to avoid the obligations applicable to high-risk AI systems based on self-assessment (e.g., when the AI system is only intended to perform a narrow procedural task or to improve the result of a previously completed human activity). However, AI systems referred to in Annex III shall always be considered to be high-risk where they perform profiling of natural persons.
Providers shall document the reasoning behind their decisions, and the EU Commission is expected to develop guidelines to assist with this assessment.
6. Transparency-related obligations
The AI Act imposes transparency-related obligations with respect to certain AI systems which by their nature raise concerns associated with a lack of transparency. Under the Regulation, those systems include, among others, AI systems directly interacting with people, AI systems generating synthetic audio, image, video or text content, emotion recognition systems, biometric categorization systems, and systems used for creating deep fakes; such systems are subject to additional obligations, mostly of an informative nature. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with an AI system so they can take an informed decision to continue or step back.
7. Specific obligations on GPAI models
After prolonged debate, the AI Act now contains specific obligations on GPAI models. A GPAI model is defined as an “AI model […] that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”. GPAI models which are used for research, development or prototyping activities before they are placed on the market are excluded from this definition.
Providers of GPAI models are subject to specific obligations, such as maintaining technical documentation of the model, providing detailed information and documentation to providers that integrate these models into their AI systems (“downstream providers”), putting in place a policy to comply with Union law on copyright and related rights, and making publicly available a “sufficiently detailed” summary of the content used for training the GPAI model, based on a template provided by the AI Office.[12] This summary should list the sources of the data used for training the model, such as large private or public databases or data archives. By way of exception, the obligations to maintain technical documentation of the model and to provide detailed information and documentation to downstream providers do not apply to providers of GPAI models that are released under a free and open-source license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, unless the GPAI model poses a “systemic risk”.[13]
In light of the issues arising from the interplay of generative AI and copyright law, the AI Act also requires that providers of GPAI models put in place a policy to ensure compliance with EU copyright law. This policy needs to include, in particular, the provider’s commitment to respect any express “opt out” declaration by a copyright holder that their work may not be used for the purposes of text and data mining (Art. 4(3) of the EU’s Digital Single Market Directive 2019/790). A recital clarifies that any provider placing a GPAI model on the EU market should comply with this obligation, “regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place.” According to the recital, this is “necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the Union market by applying lower copyright standards than those provided in the Union.”[14]
If a GPAI model poses a “systemic risk”, it is subject to additional requirements concerning model evaluation, systemic risk assessment and mitigation, cybersecurity, energy efficiency, and major incident reporting to the EU Commission. Classification as a GPAI model with systemic risk is based either on the high-impact capabilities of the model (evaluated on the basis of appropriate technical tools and methodologies), or on an individual designation decision of the EU Commission that takes into account criteria listed in Annex XIII (e.g., the number of parameters, the quality and size of the dataset, input and output modalities, or the model’s reach among business users). A GPAI model is presumed to have high-impact capabilities when the cumulative amount of compute used for its training, measured in floating point operations (“FLOPs”), exceeds 10^25. These thresholds and criteria may be amended and/or supplemented by the Commission in light of evolving technological developments to reflect the state of the art.[15]
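For a sense of scale, the sketch below (Python; our own illustration, not part of the Regulation) estimates training compute with the widely used ~6 × N × D community heuristic for dense transformer models (N = parameters, D = training tokens) and compares it against the 10^25 FLOP presumption threshold; the model figures are invented for the example:

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP presumption
# threshold (Article 51). The 6 * N * D approximation of training compute
# is a common community heuristic, not prescribed by the Regulation.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate training compute as ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def presumed_high_impact(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens lands at
# ~6.3e24 FLOPs -- below the 1e25 presumption threshold.
print(presumed_high_impact(70e9, 15e12))  # False
```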
8. Notable additional obligations/exceptions and interplay with the GDPR
a) Fundamental Rights Impact Assessment (“FRIA”)
The Regulation places special emphasis on the protection of the fundamental rights of individuals by requiring public institutions and private organizations that provide services to the community (such as education, health care, accommodations, social services, and life and health insurance entities) to conduct a FRIA prior to deploying a high-risk AI system.[16]
The assessment must include, among other things, a description of the process in which the high-risk AI system is to be used, the duration and frequency of its use, the categories of natural persons affected, the specific risks of harm, the measures to be taken if those risks materialize, and a description of the implementation of human oversight measures.
However, a FRIA will not have to be carried out for aspects covered by other legal obligations, such as the Data Protection Impact Assessment under the GDPR.
Here too, the AI Office is expected to develop a questionnaire to facilitate compliance with this obligation.
b) Mitigation and testing of bias in AI systems
Biases in AI systems, which can lead to unfair and harmful outcomes, are a major concern in the rapidly evolving world of AI.
Detecting such biases relies heavily on sensitive data, which is essential for developing and validating bias detection methods; without it, research on fairness and bias detection risks remaining theoretical and disconnected from practical applications.
Accordingly, the AI Act includes a provision that providers of high-risk AI systems may exceptionally process special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2)(g) of the GDPR and Article 10(2)(g) of Regulation (EU) 2018/1725, but only to the extent that doing so is strictly necessary for ensuring bias detection and correction.[17] This processing is subject to stringent safeguards to protect individuals’ fundamental rights and freedoms (e.g., the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymized data, and the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first).
This ensures that AI systems can be scrutinized and corrected for fairness while upholding strong privacy protections. This balance is crucial for advancing AI technologies responsibly and ethically.
Additionally, the AI Act contains certain obligations on providers of high-risk AI systems and GPAI models which relate to bias testing. With respect to high-risk AI systems, in particular, such obligations include: testing AI systems in order to identify targeted risk management measures[18]; subjecting training, validation and testing data to data governance and management practices in order to detect, prevent and mitigate possible biases[19]; providing information to deployers (including the level of accuracy, metrics, robustness and cybersecurity against which the high-risk AI system has been tested and validated)[20]; developing AI systems in a way that eliminates or reduces the risk of possibly biased outputs influencing input for future operations (“feedback loops”)[21]; conducting fundamental rights impact assessments (see Section 8(a)); maintaining technical documentation (including the validation and testing procedures used)[22]; and putting in place a quality management system.[23]
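By way of illustration only, the sketch below (Python) computes one common bias test, the demographic parity (positive-rate) gap across groups, of the kind a provider might run when screening a high-risk system’s outputs for group-level skew; the metric choice and sample data are our own, as the AI Act does not prescribe specific fairness metrics:

```python
# Minimal demographic parity check: compare the rate of positive
# decisions across groups and report the largest gap.
from collections import defaultdict

def positive_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """outcomes: (group label, binary decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes: list[tuple[str, int]]) -> float:
    rates = positive_rates(outcomes)
    return max(rates.values()) - min(rates.values())

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(positive_rates(sample))  # {'A': 0.667, 'B': 0.333}
print(parity_gap(sample))      # ~0.333: group A is favored
```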
Further, certain testing-related obligations are placed on providers of GPAI models, including to draw up and maintain “technical documentation of the models, including its training and testing process and the results of its evaluation”.[24] Additionally, providers of GPAI models with systemic risk would also have to perform model evaluation (including adversarial testing) in accordance with standardized protocols and tools reflecting the state of the art, assess and mitigate possible systemic risks at EU level, keep track of, document and report information about serious incidents to the AI Office, and ensure an adequate level of cybersecurity protection.[25]
9. Penalties
Violations of the AI Act will result in fines that, similar to the GDPR, are based on the company’s total worldwide annual turnover for the preceding financial year. These penalties are designed to be effective, proportionate, and dissuasive, while also taking into account the economic viability of small and medium-sized enterprises (SMEs) and startups.
Under the AI Act, fines are structured as follows (a short illustrative calculation follows the list):
(i) € 35 million or 7% of the global annual turnover (whichever is higher) for infringements of the rules on prohibited practices;
(ii) € 15 million or 3% of the global annual turnover (whichever is higher) for noncompliance with any of the other requirements or obligations of the Regulation, including infringement of the rules on GPAI models; and
(iii) € 7.5 million or 1% of the global annual turnover (whichever is higher) for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
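To make the “whichever is higher” mechanics concrete, here is a minimal sketch (Python); the tier labels and the company turnover figure are invented for the example, while the amounts and percentages follow the tiers above:

```python
# Illustrative fine calculation under the AI Act's "whichever is higher"
# rule. Tier labels are our own shorthand; amounts mirror the three
# tiers listed in the text.
TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover)

# A company with EUR 2bn turnover infringing the prohibited-practices
# rules: 7% of 2bn = EUR 140m, which exceeds the EUR 35m floor.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```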
10. Conclusion
The Regulation will have a significant impact on businesses in the EU and elsewhere. Given the broad scope of application of the AI Act’s provisions, as well as the risk of hefty fines for non-compliance, businesses should promptly consider the application of relevant measures under the Regulation to their AI systems and models and start taking the necessary steps to ensure compliance.
[1] Please see our post on “Agreement reached on the EU AI Act: the key points to know about the political deal” at https://www.clearyiptechinsights.com/2023/12/agreement-reached-on-the-eu-ai-act-the-key-points-to-know-about-the-political-deal/#_ftn13.
[2] According to Article 3(1) of the AI Act, an “AI system” is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
[3] See https://oecd.ai/en/wonk/ai-system-definition-update.
[4] See Articles 111 and 113 of the AI Act.
[5] See EU Commission, AI Pact at https://digital-strategy.ec.europa.eu/en/policies/ai-pact. It was reported, however, that workshops on a first draft of the voluntary commitments have been delayed, and the AI Pact “has gradually shifted from promoting early compliance to providing a platform for AI companies to present compliance plans and strategies in advance” – see MLex, “EU initiative to anticipate compliance with upcoming AI rules struggles to take off” (July 8, 2024).
[6] See Article 2 of the AI Act.
[7] According to Article 3(3) of the AI Act, a “provider” is the “natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.” Under Article 25, any distributor, importer, deployer or other third party will be considered a provider if they (i) put their name or trademark on a high-risk AI system already placed on the market or put into service, (ii) make a substantial modification to a high-risk AI system, or (iii) modify the intended purpose of an AI system, including a general-purpose AI system, so that it becomes high-risk.
[8] According to Article 3(4) of the AI Act a “deployer” is the “natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
[9] See Article 2(1) of the AI Act.
[10] See Article 2(12) of the AI Act.
[11] Listed in Article 6(3) of the AI Act.
[12] According to Article 3(47), “AI Office” means the EU “Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024.”
[13] See Article 53(2) of the AI Act.
[14] See Recital 106 of the AI Act.
[15] See Article 51 of the AI Act.
[16] See Article 27 of the AI Act.
[17] See Article 10 of the AI Act.
[18] See Articles 9 and 60 of the AI Act.
[19] See Article 10 and Recitals 67 and 70 of the AI Act.
[20] See Article 14 and Recital 75 of the AI Act.
[21] See Article 15 of the AI Act.
[22] See Article 11 of the AI Act.
[23] See Article 17 of the AI Act.
[24] This technical documentation shall contain, among other things, “information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering etc), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable”. See Article 53 of the AI Act.
[25] See Article 55 of the AI Act.