Today may be an historic day in the annals of technological regulation: the EU AI Act (the “Act”) has officially entered into force.
It introduces an expansive regulatory framework for Artificial Intelligence (“AI”), taking forward the EU’s stated mission to create “a Europe fit for the Digital Age” and following in the footsteps of the GDPR 2016 and the Digital Markets and Digital Services Acts 2022. It has been widely described as a ‘watershed moment’ and a ‘trailblazing’ piece of legislation: but is all this acclaim justified? How significant will this law really be?
We set out below an overview of the nuts & bolts of the Act, including its scope, key principles, and sanctions. A second part will follow next week, setting out the obligations on developers and users of AI and some key legal and practical obstacles facing the Act.
Q&A on the Act
Q: Why has the EU published the Act?
A:
- Per the European Parliament[1], to make sure AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly; to make sure AI systems have oversight from humans, not just automated systems; and to establish a technology-neutral and uniform definition of AI to be applied to future AI systems.
- Per the European Commission[2], to provide AI developers and users with clear regulatory obligations; to reduce the administrative and financial burdens for businesses, especially SMEs; and to support innovation and investment in AI in the EU.
Q: What is AI, under the Act?
A: The Act applies to two primary classes of technology: ‘AI systems’ and ‘General Purpose AI (“GPAI”) models’. These may overlap; a GPAI system is an AI system based on a GPAI model[3].
- An ‘AI system’ is defined based on three key criteria – it is (i) a “machine-based system”, which (ii) operates with autonomy, and (iii) makes inferences from input data, based on certain “explicit or implicit objectives”[4]. This definition has been criticised for being both vague and expansive; technologists are unsure what it will cover, and there will likely be substantial discretion in its implementation.
- A ‘GPAI model’ is defined as an AI model that shows “significant generality” and can “competently [perform] a wide range of distinct tasks”[5].
Q: How will AI be regulated?
A: The Act applies a tiered risk-based analysis for AI systems:
- ‘Unacceptable risk’ – those that constitute ‘a threat to people’. They are completely banned, subject only to very limited exceptions for law enforcement. This includes cognitive-behavioural manipulation of vulnerable groups and subliminal techniques to distort human behaviour and circumvent free will[6].
- ‘High risk’ – those that negatively affect safety or fundamental rights. They are subject to an onerous regulatory regime, as we will explain in Part 2. This is a very wide category[7]. It includes AI systems already covered by product safety legislation (such as in toys, aviation, cars, medical devices), used for critical infrastructure (such as digital, road traffic, or utilities), and used by employers for recruitment, termination, and performance evaluation.
- ‘Limited risk’ – those subject to risks associated with a lack of transparency. They must meet transparency and labelling requirements and comply with EU copyright law[8]. This primarily includes general-purpose and generative AI, such as chatbots and deepfakes.
- ‘Minimal risk’ – any AI systems not falling within the above categories. This may include, for example, AI-enabled video games or spam filters. Such systems are not subject to any mandatory regulation, but they are encouraged to comply with voluntary AI codes of conduct.
- ‘Systemic risk’ – a separate category for GPAI models. It applies where the computing power used to train the model is very high (more than 10²⁵ floating-point operations, or ‘FLOPs’) and/or where the Commission so designates the model[9]. This is intended to cover the most sophisticated generative AI models, including market leaders GPT-4 and Google Gemini; a rough illustration of the compute threshold follows this list.
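For a rough sense of where that threshold bites, the sketch below estimates a model’s training compute using the common ‘6 × parameters × training tokens’ heuristic from the machine-learning literature and compares it against the Act’s figure. The heuristic and the example numbers are our own illustrative assumptions, not anything prescribed by the Act:

```python
# Illustrative only: a back-of-the-envelope check against the Act's 10^25
# FLOPs systemic-risk presumption (Article 51(2)). The "6 * N * D" compute
# estimate is a common heuristic from the scaling-laws literature, and the
# example model below is hypothetical; neither comes from the Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute as ~6 x parameters x training tokens."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100bn parameters trained on 20tn tokens.
flops = estimated_training_flops(parameters=100e9, training_tokens=20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed to pose systemic risk?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```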
Q: What are the implications for UK businesses?
A: Like much EU regulation before it, the Act has extensive extraterritorial effect.
It applies both (1) where a business supplies an AI system to the EU and (2) where the ‘output’ of a business’s AI system is used or put into service within the EU[10]. It appears the latter will only apply where such output is intended by the business to be used in the EU[11], but the text of the Act itself does not make this clear[12]. UK businesses may therefore want to err on the side of caution in considering their potential obligations under the Act.
Q: Who is regulated by the Act?
A: The Act applies to four core categories of business[13]. Most obligations apply to developers (“Providers”), rather than users (“Deployers”):
- Providers – which develop an AI system/GPAI model and place it on the market or put it into service within the EU under their own name or trade mark – whether for payment or for free.
- Deployers – which use an AI system for professional activity.
- Importers – which are located or established in the EU and place on the EU market an AI system that bears the name or trade mark of an entity established outside the EU.
- Distributors – within a supply chain, other than a Provider or Importer, which make an AI system available in the EU.
Q: Are there any exclusions?
A: The Act has exclusions for[14]:
- Military, defence or national security purposes.
- Public authorities of third countries, and international organisations, using AI within a framework of international cooperation or agreements for law enforcement and judicial cooperation with the EU.
- Scientific research & development.
- Research, development and testing of AI systems prior to their being placed on the market or put into service in the EU.
- Personal, non-professional activity.
- AI systems released under free and open-source licences, provided they are not prohibited, high-risk, or subject to transparency requirements.
Q: When will the Act impact businesses?
A: The implementation of the Act is phased as follows:
- 1 August 2024: the Act enters into force, 20 days after its publication in the Official Journal.
- 2 February 2025: unacceptable-risk AI systems are banned.
- 2 May 2025: voluntary codes of practice for GPAI are due to be ready.
- 2 August 2025: rules for GPAI models come into force, and the EU’s AI governance & enforcement framework enters into operation.
- 2 August 2026: high-risk AI systems become subject to regulatory obligations. The remainder of the Act applies, except the obligations relating to high-risk AI systems covered by product safety legislation.
- 2 August 2027: high-risk AI systems covered by EU product safety legislation are subject to regulatory obligations.
- 2 August 2029: the European Commission shall submit its first review of the Act to the European Parliament and the Council, to be repeated every 4 years.
- 2030: grandfathering provisions come into play in respect of high-risk AI systems put into use before the relevant dates under the Act.
- 2 August 2030: Providers and Deployers of high-risk AI systems intended to be used by public authorities must comply.
- 31 December 2030: AI systems which are components of certain large-scale IT systems placed on the market or put into service before 2 August 2027 must comply.
Q: What sanctions and penalties apply?
A: A maximum fine of €35m or 7% of worldwide annual turnover, whichever is higher. Three levels of fine apply (illustrated in the sketch after this list):
- Violating the ban on unacceptable-risk AI systems – up to €35m or 7% of annual turnover.
- Breaking the rules on high-risk AI systems, or intentional or negligent infringements relating to GPAI models – up to €15m or 3% of annual turnover.
- Supplying incorrect, incomplete or misleading information to the relevant AI authorities – up to €7.5m or 1.5% of annual turnover.
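As a purely illustrative sketch, the snippet below shows how those caps scale with a business’s worldwide turnover, using the figures quoted above and the Act’s ‘whichever is higher’ rule for undertakings (for SMEs the lower of the two applies instead, which this sketch does not model):

```python
# Illustrative only: how the Act's fine caps scale with turnover, taking the
# higher of the fixed amount and the turnover percentage for each tier.
# Figures mirror the three levels quoted in the article above.

FINE_TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "high-risk / GPAI obligations": (15_000_000, 0.03),
    "incorrect information": (7_500_000, 0.015),
}

def max_fine(worldwide_turnover_eur: float, tier: str) -> float:
    """Return the fine cap for a tier: the higher of the fixed and % caps."""
    fixed_cap, pct_cap = FINE_TIERS[tier]
    return max(fixed_cap, pct_cap * worldwide_turnover_eur)

# Hypothetical business with EUR 2bn worldwide annual turnover:
for tier in FINE_TIERS:
    print(f"{tier}: up to EUR {max_fine(2_000_000_000, tier):,.0f}")
```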
So the costs of a failure to comply with the Act are potentially enormous – but the costs of compliance are expected to be very high too. For an optimistic perspective, the European think tank the Centre for European Policy Studies has estimated annual EU-wide compliance costs in the hundreds of millions of euros[15]. By contrast, the US think tank the Center for Data Innovation estimated in 2021 that compliance could cost the EU €31bn and cause a 20% reduction in AI investment over five years. Given the EU’s stated objective of promoting innovation and investment, it will need to be mindful that cost pressures do not undermine business appetite to develop AI within the EU.
How we can help
Rosenblatt has a wealth of dispute resolution experience and is well placed to support and advise companies and individuals. For enquiries, please contact Partner Elizabeth Weeks, Associate Jacques Domican-Bird, or Trainee Lawyer Sam Sykes.
Keep an eye out for Part 2: obligations for users and developers of AI and legal & practical obstacles for the Act.
For further details on Rosenblatt’s Dispute Resolution expertise, please see our website: https://www.rosenblatt-law.co.uk/services/.
1. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
2. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
3. Article 3(66) of the Act.
4. Article 3(1) of the Act.
5. Article 3(63) of the Act.
6. See Article 5 of the Act.
7. See Articles 6-7 and Annex III of the Act for a full list.
8. See Articles 50 and 53 of the Act.
9. See Article 3(65), Article 51 and Annex XIII of the Act.
10. See in particular Recital 22 and Article 2(1)(c) of the Act.
11. Recital 22 to the Act.
12. Article 2(1)(c) of the Act.
13. See Article 2(1) of the Act.
14. See again Article 2 of the Act, respectively at subsections (3), (4), (6), (8), (10), and (12).
15. https://op.europa.eu/en/publication-detail/-/publication/55538b70-a638-11eb-9585-01aa75ed71a1