The fast pace of AI development and use has caused difficulties for governments and regulators alike. The EU has therefore sought to trailblaze with the introduction of its EU Artificial Intelligence Act (the "AI Act"), a first-of-its-kind standalone law governing AI.
The AI Act is expected to have a significant impact on AI development, imposing a comprehensive set of obligations on all those involved in the AI supply chain, backed by the threat of hefty fines for non-compliance. Its impact will be felt far beyond the borders of the EU: AI suppliers operating in, or marketing to, the EU will need to comply.
In this three-part series of articles, we will examine the AI Act and its impact, compare the approaches being taken by the EU and the UK (including the potential impact of an imminent change in UK government), and finally focus on facial recognition software as a case study.
AI Act: An overview
On 13 March 2024, the European Parliament formally adopted the AI Act. This was significant, given that it is the world's first horizontal and standalone law governing AI. At 458 pages long, its 113 Articles and 13 Annexes are (to put it lightly) comprehensive. The AI Act will take effect in stages, with the first provisions applying six months after its entry into force.
The AI Act's goal, clearly stated at the start of the Act itself, is to "promote the uptake of human centric and trustworthy artificial intelligence… while ensuring a high level of protection of the health, safety, fundamental rights… in the Charter of Fundamental Rights of the European Union".
To achieve this, the AI Act categorises all AI Systems into four levels of risk and then, for each level, imposes a set of obligations on those involved in the AI System supply chain. 'AI Systems' are defined very broadly in the AI Act, covering all manner of machine-based systems that operate with some degree of autonomy and are capable of generating outputs.
The entities and persons facing obligations under the AI Act are grouped into four main categories: AI "providers", "deployers", "distributors" and "importers" (in this series of articles we refer to these four categories collectively as the "Suppliers" of AI Systems). The extent of a Supplier's obligations depends on the category it falls into: providers (a category that includes AI System developers), for example, bear the highest level of responsibility.
The higher the risk posed by the AI System, the stricter the duties and obligations imposed on Suppliers. Briefly, the risk categories and their obligations are outlined below:
- Unacceptable risk: AI Systems in this category are banned outright in the EU, aside from some very narrow exceptions for law enforcement. These are systems which, whether intentionally or inadvertently, cause harm to people or their fundamental rights.
- High risk: This category covers AI Systems intended for use in an exhaustively listed set of circumstances where, without adequate risk-management measures, there is a high risk of harm; AI Systems used in critical infrastructure are one example. Suppliers of high-risk AI Systems must comply with a number of measures aimed at minimising those risks, including: risk identification and mitigation, human oversight, the use of only high-quality training data, the provision of information enabling users to understand how the AI System works, careful activity logging, and high standards of robustness, security and accuracy.
- Limited risk: These are AI Systems which are not inherently risky but may pose a risk owing to a lack of transparency. Examples include AI chatbots, deepfake technology, and audio and visual generative AI, where harm could arise if users are unaware that the output is AI-generated. Suppliers of limited-risk AI Systems must take steps to ensure transparency, for example by ensuring AI-generated images are marked as such.
- Minimal or no risk: This category covers all other AI Systems which do not fall into the categories above, such as AI-enabled video games or spam filters. No requirements attach to this risk category.
The AI Act prescribes a separate body of rules for Suppliers of general-purpose AI models ("GPAI"), again using a system of categorisation: the most numerous and onerous requirements fall on Suppliers of GPAI models classed as posing "systemic risk". That classification rests in part on a computational threshold which currently captures only the most advanced models, and which may be revised in future.
The AI Act also seeks to promote and encourage trustworthy AI innovation by requiring each EU Member State to establish at least one regulatory sandbox for the development of AI. This is significant given that one of the key criticisms of the AI Act is its potential impact on innovation, a point we explore further in the next article in this series.
Impact in the UK
As EU legislation, the AI Act will not apply directly in the UK. However, it will almost certainly affect Suppliers, and users, based in the UK. Under the AI Act, any entity based outside the EU which intends to market an AI System in the European single market must comply with the AI Act. Similarly, Suppliers based in the EU must ensure they comply even if the AI System is marketed solely outside the EU. The AI Act is not the only piece of EU legislation with this kind of indirect extraterritorial effect, a phenomenon that has become known as the "Brussels Effect".
Conclusion
With the AI Act, the EU has taken a comprehensive approach to the regulation of AI, requiring Suppliers to follow strict requirements under threat of hefty fines for non-compliance (for the most serious breaches, up to the higher of 7% of worldwide annual turnover or €35 million). It will have a huge impact beyond EU Member States, including in the UK, even though the AI Act is not directly applicable there.
In the next article, we examine the stark differences between the EU's and the UK's current approaches to AI regulation, and why the AI legal landscape will be especially complex for those based in the UK, given that, by comparison, the current government's approach is considered decidedly "laissez-faire" and has been positioned as "pro-innovation". In addition, with a general election on the horizon, there is some uncertainty surrounding the future of AI regulation in the UK: Labour (currently leading in the polls) have promised a firmer approach should they win the election.
Click below to read parts two and three of this series.