Our previous article, the first in a three-part series on AI regulation, provided an overview of the AI Act and its implications for those in the EU and beyond. In this second article, we explore the approach being taken to regulation within the UK and how differences in approach between the EU and the UK are likely to impact upon the development and use of AI more generally.
The AI Act is the first legislation of its kind and is an all-encompassing and prescriptive body of rules. By comparison, the UK's current approach is widely seen as decidedly laissez-faire: it has opted, at least for now, to leave the management of AI to existing regulatory bodies operating within existing legal frameworks.
While those in the UK will not be entirely free from the reach of the AI Act, the UK's lighter-touch regime might, on a generous interpretation, be said to open up interesting opportunities. The approach does not, however, create much clarity, nor does it go far towards mitigating AI risks.
Pro-Innovation vs Pro-Safety
In the last few years, we have seen the release onto the open market of ChatGPT and many other AI chatbots, sparking concern and debate around generative AI and general-purpose AI (GPAI). The AI Act attempts to address all possible risks from AI Systems, including those arising from newer developments, and is consequently extremely wide in coverage.
The AI Act is not completely rigid. Indeed, it factors in potential future change, such as variations in GPAI. However, critics have pointed out that the bulk of its provisions will – as is the way with legislation – be difficult to change. Should AI development create unique risks in the future, the AI Act may not be capable of adequately addressing them.
By comparison, the UK approach is far more flexible. The UK government published a White Paper, "A pro-innovation approach to AI regulation", in March 2023, outlining its ambition "to make the UK a great place to build and use AI that changes our lives for the better." To this end, the White Paper proposes five cross-sectoral principles, and prior to Parliament's dissolution on 30 May 2024 ahead of the general election, the UK government was encouraging existing regulators to create non-statutory guidance (steered by the five principles) for the development and use of AI.
Currently, there is no active proposal in the UK for a top-down, comprehensive set of AI regulations; existing regulators will shoulder the burden within their existing legal remits. This approach has been criticised: the lack of clear rules, combined with a principles-based approach, risks producing a disjointed body of guidance with no real ability to tackle AI's novel, emerging risks. While ostensibly "pro-innovation", the decentralised approach may also confuse AI developers and users, and, arguably, innovation will not be helped by a lack of certainty. All the while, global AI regulation is advancing and the UK risks falling woefully behind.
Recognising the need for centralised oversight, the government established the AI Safety Institute in January 2024, which aims to assess the novel risks posed by AI. However, the Institute lacks any regulatory power to manage those risks effectively.
Furthermore, a Private Member's Bill, the Artificial Intelligence (Regulation) Bill, was introduced in the House of Lords in November 2023. The Bill proposed the creation of an AI Authority with a range of powers, including the creation of regulatory sandboxes and the assessment and monitoring of AI risks. It also proposed that any business developing, deploying or using AI be required to appoint a dedicated AI officer. However, in keeping with the broader UK approach, the Bill did not propose any power to actually control AI risks effectively, nor did it grant the AI Authority the power to create secondary legislation imposing such controls. Therefore, while the Bill might arguably help to unify UK regulators' approaches, it leaves a lot to be desired in terms of tackling AI risks. In any event, the Bill (following its third reading on 10 May 2024) was sent to the Commons and, now that Parliament has been dissolved, its future is uncertain.
The UK election
As the general election nears, and with Labour currently leading in the polls, a change in the approach to AI could be in store. Labour has indicated it will promptly introduce AI regulation and create a Regulatory Innovation Office to streamline broader tech regulation while still fostering innovation. Earlier this year, Labour also announced plans to publish an AI strategy paper, though we have yet to see this. While this signals a proactive stance toward AI regulation, the extent and (should Labour win the election) effectiveness of its strategy remain to be seen.
Regulatory sandboxes
Critics of the AI Act say it places burdensome obligations on AI Suppliers, which may have a stifling effect on AI innovation in the EU. The AI Act almost entirely prevents the marketing of AI Systems in the limited-risk or high-risk categories unless an AI Supplier can comply with its numerous obligations. The AI Act does envisage the creation of "regulatory sandboxes" to foster innovation and to allow AI Suppliers to test AI Systems and ensure compliance before marketing them in the wider EU market. However, there are no lighter-touch rules once a tested AI System hits the market. That may force smaller AI Suppliers out of the market altogether, while bigger AI Suppliers may simply opt for a lighter-touch regime elsewhere.
This creates a potential opportunity for those in the UK, and may conceivably make the UK a tempting proposition for AI Suppliers, who might choose simply to confine themselves to the UK and avoid the EU altogether. However, that is not an option for those who want access to the largest single market in the world (and let's be honest, that is likely to apply to most, if not all, AI Suppliers).
While the EU will have regulatory sandboxes, the UK may provide a better testing ground for AI Suppliers if AI regulation is not a priority for whichever party wins the general election. Suppliers could benefit from lighter regulation while still being able to market and validate their AI Systems on the UK market, later investing the time and resources needed to meet the much higher EU standard required to market an AI System in the EU. In this way, the UK might just prove to be a far more effective "sandbox" than any of the AI Act's mandated member state equivalents.
Conclusion
The different approaches taken in the EU and UK may result in significantly different outcomes for the development of AI.
The UK's current approach appears to be a "win" for AI innovation over the EU's approach, in much the same way that the US westward expansion was a "win" for settlers: a lawless rush to the frontier, where freedom from regulation allows rapid growth and exploration, with unpredictable risks, no protection from harm and potentially high numbers of casualties along the way. Emphasis on "current", though, as the next UK government may well bring some law and order to the AI frontier.
By comparison, the AI Act considers safety and risk mitigation paramount, but in doing so it creates such a high burden on AI Suppliers that it may stifle AI innovation altogether. Time will tell which approach prevails.
Click below to read parts one and three of this series.