The AI interview: Graham Lovelace
Graham Lovelace is an award-winning journalist, editor and consultant. Today he’s a leading strategist and writer on the impacts of generative AI on human-made media, the ethics and behaviour of the AI companies, the future of copyright in the AI era, and the evolving AI policy landscape. We sat down with him to discuss all things AI, including what happens next and what creators can do to protect their rights.
Could you briefly talk about your background and how you came to focus primarily on generative AI?
I’m a journalist — 42 years and counting, would you believe? The intersection of technology and journalism has been a recurring thread throughout my career. Much of that was spent in the convergence of television and the web. The most prominent example of that was my time as Editor-in-Chief of Teletext Limited, a pre-web news and information service accessed on TVs and used by 16 million people every day.
Back then, I thought that the emergence of the internet would have the biggest and most profound impact on the future of media. I was wrong. The biggest force affecting all forms of media, from the biggest publishers right down to individual illustrators, authors and photographers, is generative AI.
It is impacting everyone in society. It’s changing the way we find information and the degree to which we can trust that information. It’s no longer just a case of accessing a trusted source. Generative AI services are now acting as the middleman. They’re delivering answers to our prompts. And now with generative search, you’re not even sure where the information has come from. Is it true? Is it reliable? Is it trustworthy?
I’m a journalist and an editor. I come at this with a constantly curious mindset, constantly asking questions, trying to join the dots and see where this is going to end up. My fear is that if we don’t get the right support from governments, and if we don’t act ourselves as a creative community, then this really is an existential threat to everything we do.
Zooming in on copyright, 2025 was a rollercoaster year for AI regulation. Could you summarise the past year and where we are now?
All governments around the world have faced heavy lobbying from Big Tech. The message has essentially been: make it easier for us to scrape content because we need high-quality material to train our large language models. These models are the powerhouses of the generative revolution. Big tech companies have pushed for a broad copyright exception, allowing them to scrape publicly available information without the risk of being sued for copyright infringement. But under UK copyright law, taking content without consent or compensation is a breach of copyright, even if it’s publicly accessible.
The Government ran a consultation from December 2024 to February 2025. It went into that consultation with a preferred option, allowing AI companies to freely scrape content unless creators actively opted out. We then saw a huge backlash from the creative sector, culminating in the Make It Fair campaign and an unprecedented public response to the consultation.
In the House of Lords, creators’ champion Baroness Kidron sought to amend legislation to include transparency protections, so creators would at least know if their work had been scraped. This led to ‘parliamentary ping pong’ between the Lords and Commons, with the Government being defeated five times. Eventually the Lords backed down after extracting concessions, one of which was to conduct an economic assessment of the options presented and report back by 18 March. That’s where we are now.
And what do you anticipate happening next?
Ministers have since conceded that they were wrong to go into the consultation with a preferred option, giving the impression that their minds had already been made up. Just after Christmas, we had Liz Kendall, the new Technology Secretary, and Lisa Nandy, Culture Secretary, both appear in Parliament to say that the Government got it wrong and that this was a reset moment. We now have this looming deadline of 18 March for the Government to provide an update on the outcome of their working groups, provide an economic assessment of each of their options and hopefully set out their latest thinking.
The best outcome is the Government categorically rules out the broad exception with opt-out idea that they had going into the consultation, affirms that Britain’s gold standard copyright regime will continue, and bolsters it further by adding those emergency transparency protections, so that creators know whether their content has been scraped. Because at the moment they have no idea.
The worst-case scenario is that ministers continue to kick the can down the road. That they say this is all too big. We’ve had our working groups, but we’re at the very early stages still. We don’t yet have the technology in place to do this properly and it’s far too complicated to come to any sort of decision by March. Let’s give ourselves another year or two. Let’s drag this out for as long as we possibly can. Meanwhile, creators are already hurting, and without intervention this will only get worse.
We’ve also seen a number of lawsuits against AI companies with mixed results for creators. What are your thoughts and do you see it being a fruitful avenue for forcing them to change their behaviours?
We’ve seen a plethora of lawsuits in the States, where there is a very different approach to copyright under something called the fair use doctrine. The developers of large language models have argued in the States that scraping content on the web to train a large language model is a fair use of creators’ content – both the scraping itself and the acquisition of content through third-party datasets.
The first really big lawsuit to be filed was The New York Times versus OpenAI and Microsoft. The Times argues that its content has been taken (from behind a paywall) without permission or the offer of compensation to create substitutive products that then compete with the original content. That case, which is still ongoing, set the ball rolling for many other cases we’ve since seen, such as those against Meta and Anthropic.
The Big Tech companies are certainly scared. We know that because they’re lobbying the Trump administration to declare that the training of their large language models is fair use and to give them a ‘get out of jail free’ card. Because if any one of these class actions actually demonstrates that a wilful infringement has occurred, then the damages for that in the US can run up to $150,000 per work. So if you’ve got millions of books, millions times $150,000, this is where you get to damages exceeding $1 trillion. Even if this is knocked down to the realm of tens of billions of dollars, even for players like OpenAI, and even for Microsoft, Amazon and Google, that is a huge deal.
Do you see licensing as a means of facilitating access to training data in a fair and transparent way that respects creators’ rights?
I hope so, and I think it could happen in two ways. The first is that the Big Tech companies are forced to do it through new legislation or simply being reminded of existing requirements. The Government could state that if you want to train your models on creators’ content, you need a licence, which is essentially the status quo. Despite the hand-wringing, we have not seen ministers say this yet with absolute clarity and firmness.
The other way this happens is voluntarily, as the Big Tech companies manage their reputations and public perception. We’ve seen the public become increasingly wary of this technology. All it takes is one or two of the major players going down this route for the rest to follow, not wanting to be left behind.
Interestingly, Microsoft announced recently that it’s looking at a creative content marketplace. And the language it used was actually quite promising. Microsoft described it as a mediated marketplace where creatives and publishers can place content and say how they want their content to be used. I think Microsoft and others are finally beginning to realise that if they continue as they are, what will start to happen is more and more content going behind paywalls, and more creative content from smaller players never actually making it onto the web at all. And that will result in the open web becoming a much-depleted online experience.
Are there any other countries or jurisdictions that the UK can learn from in how they’ve approached AI and creators’ rights?
I would point immediately to Australia. Australia had its own equivalent debate, with very similar language — again, the result of extremely similar lobbying. Last year, Australia’s Productivity Commission came out with a report that floated the option of going down the route of a text and data mining exception.
There was a huge and immediate kickback from the creative community in Australia. They had had a year to prepare, and they’d seen what had happened in the UK, so were able to galvanise very quickly. They told the Albanese government in no uncertain terms that this was unacceptable. They argued that this was about their culture, their learning, their society. That if they went down this route, they would be a diminished nation.
Shortly afterwards, the government there effectively ruled out going down the route of a text and data mining exception. They reaffirmed that copyright is copyright and that it would be maintained. Ministers are still looking at other aspects of the issue, so it’s not completely over, but that is the jurisdiction where I’ve seen real clarity rather than this wishy-washy language about balancing the interests of Big Tech companies and creatives — which is what Liz Kendall continues to say, and what her predecessor made almost a full-time job of parroting.
You will hear that other jurisdictions like Japan and Singapore have a looser approach and allow wholesale scraping. But even there, the legislation contains wording requiring that content be legally acquired. It is not a complete free-for-all. Big Tech often says we should be more like Singapore or Japan. But it’s not nearly as loose as the Big Tech community sometimes suggests.
What do you think creatives and the creative industry in general should be doing to assert their rights?
There are a few things that everybody — whether you’re an individual author, a small independent publisher, or a major publisher — needs to do. The first is to understand copyright. You need to be au fait with the language. You need to understand that it’s an automatic protection. That’s why the opt-out proposal is such a problem, because it undermines that automatic protection by requiring you to do something.
You also need to understand that, while copyright differs from country to country, there are international agreements and treaties. There are basic principles that all countries adhere to. It’s a form of intellectual property. It’s a right. There may be differences in duration between countries, but fundamentally it’s a crucial protection for creators. We need to keep that front of mind in all lobbying.
So understand copyright and assert your copyright. Authors should state clearly in anything they write, and publishers on the imprint page of every book, that the work must not be used for AI model training. They may well do it anyway, but unless you assert your position at the front of your book, you’ve left nothing on record. After the event, you need to be able to say, “We told you we did not consent to this”. Authors also need to check their agreements with publishers. Ask them what conversations they’re having with AI developers. What contracts might be coming down the line? I would urge publishers to be open, be transparent, and be honest.
The second thing is to lobby MPs and the Government. Use the law that exists, but also lobby for strengthened copyright, particularly transparency protections. We need to know what is going into these models, in granular detail. And we should not accept the argument that full transparency would somehow reveal commercially sensitive information. They know what they’ve scraped. They know it in intricate detail. Journalists like Alex Reisner at The Atlantic have been able to work it out — just Google ‘The Atlantic AI Watchdog’ and see if your book has been used — so of course the companies know.
Finally, we need a public information campaign. We need a narrative strong enough to counter Big Tech. Politicians will not act unless they feel pressure. This has to be a bottom-up, author-led, creative-led movement. We are writers, for God’s sake! We should be good at this. We need to articulate clearly that a serious injustice has occurred and is still occurring. For every hour that creators lack transparency protections, AI developers will continue training on their content.
If nothing is done, we will have a less rich, less interesting culture. Content will retreat from the open web. Knowledge will be diminished. The web will become less useful than it is or could be. This injustice needs righting. Unless we create a counter-argument strong and powerful enough to challenge the nonsense spouted by Big Tech, we won’t get anywhere.
You can keep up with the latest developments around generative AI on Graham’s newsletter, Charting Gen AI.