The EU's Artificial Intelligence Act (Regulation (EU) 2024/1689), the first comprehensive AI law, entered into force on 1 August 2024. While the AI Act will be fully applicable from 2 August 2026, some provisions apply sooner.
The ban on AI systems posing unacceptable risks will apply from 2 February 2025. Unacceptable-risk AI systems are those considered a threat to people and include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Biometric identification and categorisation of people
- Real-time and remote biometric identification systems, such as facial recognition.
Codes of practice will apply from 2 May 2025.
Providers of general-purpose AI (GPAI) models, including large generative AI models, will need to comply with transparency requirements from 2 August 2025. The European Commission has launched a consultation on a Code of Practice for providers of GPAI models, which will address areas such as transparency, copyright-related rules, and risk management. The Commission expects to finalise the Code of Practice by April 2025.
High-risk systems will have until 2 August 2027 to comply with conformity assessment requirements, including implementing quality and risk management systems. High-risk systems include AI systems used in products falling under the EU's product safety legislation, such as toys, aviation, cars, medical devices and lifts. AI systems identified as high-risk must comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.
AI systems in the following specific areas will also have to be registered in an EU database:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law.
AI systems posing specific transparency risks, such as chatbots, must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must visibly disclose that it has been artificially generated or manipulated, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
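The Act requires the marking but does not mandate a particular marking technology (provenance standards such as C2PA are one real-world candidate). As a minimal sketch, assuming a hypothetical JSON envelope format, a provider might attach a machine-readable disclosure to generated output like this:

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, model_name: str) -> str:
    """Wrap generated content in a machine-readable disclosure envelope.

    Hypothetical schema for illustration only; the AI Act requires
    machine-readable marking but does not prescribe this format.
    """
    envelope = {
        "content": content,
        "ai_generated": True,                       # explicit disclosure flag
        "generator": model_name,                    # which system produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

def is_ai_generated(payload: str) -> bool:
    """Detect the disclosure flag in a marked payload."""
    try:
        return bool(json.loads(payload).get("ai_generated", False))
    except (json.JSONDecodeError, AttributeError):
        return False  # unmarked or non-JSON content

marked = mark_as_ai_generated("Example model output.", "example-llm-v1")
assert is_ai_generated(marked)
```

The two-sided design mirrors the legal requirement: the producer embeds the marker, and any downstream party can detect it without access to the generating system.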
Minimal-risk AI systems, such as AI-enabled recommender systems and spam filters, face no obligations under the AI Act due to their minimal risk to citizens' rights and safety.
The Act has extraterritorial reach, although it does not apply to individuals using AI for personal purposes. The European Commission's AI Office will be the key implementation body for the AI Act at EU level, as well as the enforcer of the rules for general-purpose AI models. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.
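To illustrate how the "whichever is higher" cap works in practice, the short sketch below computes the upper bound of the fine for a given turnover (the function name and figures used in the example are illustrative, not from the Act):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious infringements:
    EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a firm with EUR 1 billion turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure applies.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```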