In August 2024, the Artificial Intelligence (AI) Act came into force across the European Union. Ahead of the anticipated introduction of an AI Bill in the UK, Ed Rea from technology lawyers Arbor Law assesses the changing regulatory landscape and advises businesses on how they can prepare for what lies ahead.
A world first
On August 1, the European Union’s Artificial Intelligence (AI) Act came into force, marking the world’s first comprehensive AI law. The AI Act applies to every company supplying AI systems within the EU or providing output from an AI system in the EU. Systems and practices are categorised into four levels of risk, with stricter rules applying to higher risks:
- Minimal Risk: AI systems like spam filters and AI-enabled video games face no mandatory obligations under the AI Act. Companies may voluntarily adopt additional codes of conduct.
- Limited Risk: Systems like chatbots must comply with transparency requirements, tailored to the nature of the AI system.
- High Risk: AI systems, including AI-based medical software and recruitment tools, must meet stringent requirements, including robust risk mitigation systems, high-quality datasets, clear user information, and human oversight.
- Unacceptable Risk: The AI Act prohibits certain AI practices, including subliminal techniques, exploitation of vulnerabilities, untargeted web scraping of images for facial identification databases, and emotion inference systems in the workplace or educational institutions. These practices are banned outright.
Under the AI Act, all providers of general-purpose AI (GPAI) will be required to maintain technical documentation and information, put in place a policy to comply with EU copyright law, and make public a detailed summary of their training data so that copyright holders can see how their work has been used.
The AI Act introduces a transparency requirement for AI systems, including GPAI systems and high-risk or limited-risk AI systems, that are intended to interact directly with humans or to generate content viewed by humans. This wide-reaching obligation requires providers to ensure individuals are informed when they are interacting with an AI system and that AI-generated output is labelled as such.
The AI Act may lead tech companies to offer different or limited versions of their products in the EU due to potential penalties for regulatory violations. Non-compliance can result in significant fines: up to €35 million or seven per cent of global turnover (whichever is higher) for breaches related to prohibited AI practices.
How different will the UK be?
On July 17, 2024, the King’s Speech set out the priorities of the new UK Government, led by Sir Keir Starmer. The Speech included plans relating to the development and regulation of AI, as King Charles announced that the new Government, “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”
The Labour Government’s plan and timescale for AI remain unclear, beyond the stated focus on regulating the models used for generative AI. Even so, this marks a potentially significant policy change from the previous Conservative-led government, which aimed to take a light-touch approach to legislating for AI by having existing sector-specific regulators identify and address gaps in regulation.
While no specific AI bill was announced as part of the King’s Speech, media reports since then have suggested that one is still on the horizon. According to the Financial Times, senior Labour ministers have met recently with leading technology companies and indicated that an AI bill was forthcoming and would focus, “exclusively on two things: making existing voluntary agreements between companies and the government legally binding, and turning the UK’s new AI Safety Institute into an arm’s length government body.”
The AI Safety Institute (AISI) was launched by the previous government in 2023 as a directorate of the UK Department for Science, Innovation, and Technology. Its aim is to rigorously research and test AI models for risks and vulnerabilities.
In November 2023 the UK hosted the AI Safety Summit at Bletchley Park, where tech businesses signed an agreement with governments including the UK, US and Singapore. While not legally binding, the agreement enabled signatory governments to risk-test new models prior to their release to the market. In May 2024 the UK co-hosted the AI Seoul Summit with South Korea, where tech companies made various voluntary commitments relating to developing AI safely.
According to the Financial Times, government officials in the UK want to turn these voluntary agreements into law to, “ensure that companies already signed up to the agreements cannot renege on their obligations if it becomes commercially expedient to do so.”
The FT’s sources expected the substance of a new AI bill to be unveiled within weeks, followed by a consultation lasting approximately two months. The media outlet also reported that the AISI could help set global standards for AI, but that some areas of AI regulation will be addressed outside the bill, such as the use of intellectual property to train AI models without payment or permission.
What next for technology businesses?
For companies operating under the remit of the EU’s new regulations, it is advisable to proactively assess the AI Act’s applicability to their operations and implement necessary changes to ensure compliance.
For the UK, uncertainty about the government’s intentions for AI regulation will remain until an official announcement is made or a consultation is launched. The announcement in July of an “AI Opportunities Action Plan”, intended to identify how the new technology can drive economic growth, has started part of that process, and further announcements can be expected soon. Once the UK’s AI bill is published, companies should proactively assess its applicability to their operations and make any changes required to ensure compliance.
About the author: Ed Rea is a co-founder of Arbor Law and a distinguished commercial technology lawyer recognised for his extensive expertise in technology-related transactions and relationships.
Arbor Law offers expert legal advice to help navigate the complexities of the AI Act, manage AI use in your organisation, and develop effective compliance strategies. Find out more at Arbor.Law