UK government drops AI copyright opt-out plan after backlash
After pushback from high-profile artists and music industry giants, the UK government has U-turned on its AI training plan.
The UK government has officially abandoned its controversial plan to change how copyrighted material can be used in AI training, following months of backlash from across the creative industries.
As Complete Music Update reports, the proposal centred on a “text and data mining” (TDM) exception. This would have allowed AI companies to train their models on existing content, including music, without permission, unless rights holders actively opted out. Critics argued this system would weaken copyright protections and place an unfair burden on creators.
That approach has now been dropped. BBC News reports that Technology Secretary Liz Kendall confirmed the shift, stating, “We have listened.” The government has also made clear that it no longer supports the opt-out model and currently has “no preferred option” for what comes next.
This marks a major reversal from its earlier position, which had already sparked concern across the music industry and beyond. The government’s original opt-out proposal raised serious questions around ownership and control.
Despite stepping back, the government has not replaced the plan with a clear alternative. Instead, it will continue consulting while exploring other areas, including licensing frameworks, transparency rules, and how AI-generated content should be labelled. Officials have also pointed to new areas of focus, such as AI-generated “digital replicas”, also known as “deepfakes”, which are a growing concern, particularly in music.
Recent figures highlight the scale of the issue. Sony Music has reportedly asked streaming platforms to remove more than 135,000 tracks featuring unauthorised AI voice clones of its artists. According to Sony’s Dennis Kooker, these recordings can cause “direct commercial harm to legitimate recording artists” and may even damage release campaigns.
This wider context shows that the debate is no longer just about copyright law. It now includes deeper questions around identity, consent, and how artists are represented in an AI-driven world.
At the same time, legal battles are intensifying globally. In the US, music company BMG has filed a lawsuit against AI firm Anthropic, accusing it of using copyrighted lyrics without permission, as CMU reports. The lawsuit claims, “Anthropic’s infringement of BMG’s lyrics causes significant and irreparable harm to BMG and the songwriters it represents.” It adds that AI firms are profiting from creative work “without any compensation or acknowledgment”.
In the UK, the government faces a difficult balancing act. It recognises that the creative industries are a “world-leading national asset”, while also acknowledging the rapid growth of AI, which is expanding “23 times faster than the rest of the economy.” However, its own findings admit there is “no consensus on how these objectives should be achieved.”
This leaves the industry in a state of uncertainty. While many have welcomed the removal of the opt-out plan, there are concerns that similar proposals could return in a different form. AI expert and Fairly Trained campaigner Ed Newton-Rex has warned that “weakening copyright law is very much still on the table”, while industry leaders are urging the government to rule out any future exceptions that could harm creators.
This ongoing debate has been building for some time, from parliamentary discussions on strengthening copyright protections to wider industry resistance and protests. What has remained consistent is that creators want control, transparency, and fair payment when their work is used.
For now, the government’s U-turn offers a pause rather than a solution. More consultations, reviews, and discussions are expected before any new policy is introduced. Whatever comes next will be crucial, and the decisions made will shape how music, content and creativity are protected in the age of AI.