AI legislation: Where we’re at

Recent developments in generative Artificial Intelligence (AI) have posed new and urgent challenges to creators and their rights that the existing legislative and policy framework is failing to adequately address. We summarise what is happening with AI legislation, both domestically and internationally.

While greater attention has been paid to this issue in recent years, the UK has yet to set out a coherent position on AI regulation. However, as the potential dangers of unregulated AI climb the news agenda, there appears to be a growing realisation that more must be done to protect rightsholders. We await with great interest how the next Government will approach this vitally important issue.


2022 – The proposed copyright exception

Back in 2022, before the launch of ChatGPT brought AI to wider public attention, the Government’s Intellectual Property Office (IPO) held a consultation and attempted to take forward a broad exception to copyright that would have allowed text and data mining (TDM) of authors’ works for commercial purposes. Copyright law had previously permitted TDM only for non-commercial research. ALCS and other industry leaders campaigned against this, and the Government later announced, during a debate on AI, that it would not proceed with this approach.

2023 – AI regulation white paper

To try to develop a way forward, in 2023 the Government published its “AI regulation white paper”, which set out initial proposals for a “pro-innovation regulatory framework for AI”, including a range of principles to adhere to, such as safety, transparency, fairness, accountability and contestability.

This paper was open to consultation, to which ALCS contributed, making the case that any action taken on AI relating to authors must ensure that AI is used as a tool to support authors’ creative work, not as a means to bypass remuneration for it. Our position, as stated in our AI principles, is that authors must be properly remunerated for the use of their work.

Development of an AI ‘Code of Practice’

The feedback to the consultation was presented in the Government’s response. During and after the consultation, the Government held stakeholder meetings to try to develop a Code of Practice. Following limited progress, this approach was abandoned before the announcement of the 2024 election. In the absence of Government action, other voices in Parliament have raised concerns about the need to act on AI, often highlighting the need to protect rightsholders.

In January 2024, the Culture, Media and Sport Select Committee of the House of Commons called for a clear strategy on AI to safeguard creators and copyright. In response to the Government’s announcement that it would seek to develop a Code of Practice, the committee issued a statement highlighting the common concern that “without a definitive plan of action, we are concerned that AI will continue to pose a threat to creators’ intellectual property… the Government must move quickly to show it is serious about the issue and rebuild trust with the creative industries.”

2024 – Inquiry into Large Language Models and Generative AI

A month later, the House of Lords Communications and Digital Committee conducted an inquiry into large language models and generative artificial intelligence. The report of the inquiry called on the Government to “support copyright”, stating that “the point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation”, and that “we do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process”.

The Artificial Intelligence (Regulation) Bill

One consistent effort to regulate AI in legislation came from Lord Holmes of Richmond who, near the end of 2023, brought a Private Members’ Bill, the Artificial Intelligence (Regulation) Bill, to Parliament.

This Bill intended to establish an ‘AI Authority’ that would work across regulators to address the challenges presented by AI and fill regulatory gaps. Among its main clauses, it addressed copyright and IP abuses: any person training AI would have to provide the ‘AI Authority’ with a record of the data and intellectual property used in training, together with an assurance that they had been used with informed consent and in compliance with IP and copyright laws. It would also have required anyone providing a product or service involving AI to label that fact clearly for customers.

At the time of writing, the dissolution of Parliament for the July 2024 General Election has ended all legislative business until a new Parliament is elected. It remains to be seen whether this Bill will return, whether as a Private Members’ Bill, incorporated into the legislative agenda of a new Government, or at all.

2024 – Inquiry into AI governance

In the very last days before Parliament’s dissolution for the election, another committee, this time the House of Commons Science, Innovation and Technology Select Committee, set out the need for the Government to “broker a fair, sustainable solution based around a licensing framework governing the use of copyrighted material to train AI models” in a report of its inquiry into governance of AI. The next Government will have to respond to this recommendation when it is formed.

International landscape

Elsewhere in the world, legislators in the US and EU have taken a more proactive approach to AI legislation.

In 2023, the US Copyright Office launched a wide-reaching consultation on Copyright and AI. While multiple cases pursuing remuneration for the use of creators’ copyrighted content in AI training are ongoing and could produce significant case law, there have also been efforts to address the impact of AI through legislation.

In April 2024, Congressman Adam Schiff introduced a bill, the Generative AI Copyright Disclosure Act, which would require AI developers to disclose any copyrighted works used in training datasets to the US Copyright Office in advance of releasing any generative AI system.

The Artificial Intelligence Act, a European Union regulation, was passed by the European Parliament and approved by the EU Council in May 2024. Under this Act, providers of “General Purpose AI” (GPAI) models are obliged to establish a policy to respect EU law, including reservations of rights. Providers of GPAI models will need to publish a sufficiently detailed summary of the content used to train their models, giving copyright holders enough information to enforce their rights.

There remains significant uncertainty in the UK, US and EU as to how any legislation, whether established, in progress or returned to conceptual stages by an election, will actually impact authors. We will continue to work to ensure that authors have appropriate control over their work and that AI is developed with principles of remuneration, transparency and consent for authors in mind.

While this will be a rapidly changing area over the next few months, several of the major parties have set out their positions on AI, although these may yet be developed in more detail. You can read our reaction to the Labour, Liberal Democrat and Conservative manifestos.