The Act provides examples of systems presenting unacceptable risk; systems in this category are prohibited outright. Examples include real-time remote biometric identification in publicly accessible spaces, social scoring systems, and subliminal manipulation techniques or techniques that exploit the vulnerabilities of specific groups.
High-risk systems are permitted but must satisfy a number of requirements and undergo a conformity assessment before they are placed on the market. These systems must also be registered in an EU database that is yet to be established. Operating a high-risk AI system requires an appropriate AI risk management system, logging capabilities, and human oversight with clear accountability. Appropriate data governance must be applied to the data used for training, validation and testing, alongside controls ensuring the accuracy, robustness and cybersecurity of the system.
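To make the logging and human-oversight obligations more concrete, here is a minimal sketch of how a deployer might record each automated decision for later audit. The function name, record fields and identifiers are illustrative assumptions for this sketch; the Act mandates logging and oversight but does not prescribe a specific format.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of decision logging for a high-risk AI system.
# All names and fields below are illustrative assumptions; the AI Act
# requires logging and human oversight but not this particular schema.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system_id: str, input_summary: dict, output: str,
                    human_reviewer: str | None = None) -> None:
    """Append a traceable record for each automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,    # avoid logging raw personal data
        "output": output,
        "human_reviewer": human_reviewer,  # supports human-oversight audits
    }
    logger.info(json.dumps(record))

log_ai_decision(
    system_id="credit-scoring-v2",              # hypothetical system
    input_summary={"features_hash": "ab12cd"},  # hypothetical input digest
    output="application_declined",
    human_reviewer="analyst_042",
)
```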
Examples of high-risk systems include those used to operate critical infrastructure, systems used in hiring or employee evaluation, credit scoring systems, and systems for automated insurance claims processing or for setting risk premiums for customers.
The remaining systems are considered limited or minimal risk. Limited-risk systems carry transparency obligations: users must be informed that they are interacting with, or viewing content generated by, an AI system. Examples include chatbots and deep fakes, which are not considered high risk but for which users must be told that AI is behind them.
For all AI system operators, adopting a code of conduct around ethical AI is recommended. Notably, general-purpose AI (GPAI) models, including foundation models and generative AI systems, follow a distinct classification framework: the AI Act takes a tiered approach to compliance obligations, distinguishing high-impact GPAI models posing systemic risk from other GPAI models.
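As a rough illustration of the tiered classification described above, the following sketch maps example use cases to risk tiers and their headline obligations. The use-case-to-tier mapping is a simplified assumption for readability, not an authoritative legal classification.

```python
from enum import Enum

# Illustrative sketch of the AI Act's risk tiers as summarized above.
# The use-case-to-tier mapping is an assumption for clarity only;
# real classification requires legal assessment of the concrete system.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment + EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"

EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    # Defaulting to MINIMAL is a convenience for this sketch only.
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))
```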
Step 3: Prepare and Comply
If you are a provider, deployer, importer or distributor of AI systems, you must ensure that your AI practices comply with this new artificial intelligence regulation. To start on the path to full compliance with the AI Act, take the following steps: (1) assess the risks associated with your AI systems, (2) raise awareness, (3) design ethical systems, (4) assign responsibilities, (5) stay current, and (6) establish formal governance. By acting proactively now, you can spare your organization potentially significant penalties once the Act takes effect.
The AI Act is expected to enter into force in the second or third quarter of 2024, following its publication in the Official Journal of the European Union. Transition periods then apply: companies have 6 months to comply with the requirements on prohibited AI systems, 12 months for certain general-purpose AI requirements, and 24 months to comply fully with the legislation.
What are the penalties for non-compliance?
The penalties for non-compliance with the AI Act are significant and can have serious consequences for a provider's or deployer's business. Fines range from 7.5 to 35 million euros, or from 1% to 7% of global annual turnover, whichever is higher, depending on the seriousness of the offense. It is therefore essential that stakeholders fully understand the AI Act and comply with its provisions.
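As a worked example of the fine mechanics described above, the sketch below computes the applicable cap as the higher of the fixed amount and the turnover-based amount. The tier figures mirror the ranges quoted above; the turnover value is hypothetical.

```python
# Sketch of the penalty cap logic described above: the applicable cap
# is the higher of a fixed amount and a share of global annual turnover.
def fine_cap(fixed_eur: float, pct_of_turnover: float,
             annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_eur, pct_of_turnover * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2 billion global turnover
# Most severe tier: EUR 35 million or 7% of turnover
print(fine_cap(35_000_000, 0.07, turnover))  # -> 140000000.0
# Least severe tier: EUR 7.5 million or 1% of turnover
print(fine_cap(7_500_000, 0.01, turnover))   # -> 20000000.0
```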
How does the law impact the financial services sector?
Financial services has been identified as one of the sectors where AI could have the most significant impact. The EU AI Act contains a tiered risk classification model that categorizes AI systems based on the level of risk they pose to users' fundamental rights and safety. The financial industry uses a multitude of data-driven models and processes that will rely increasingly on AI in the future. AI systems used for creditworthiness assessment, or for risk assessment and premium setting for customers, fall into the high-risk category under the AI Act. Additionally, AI systems used to operate and maintain financial infrastructure deemed critical also fall within the scope of high-risk AI systems, as do AI systems used for biometric identification and categorization of natural persons or for recruitment and the management of employees.