In a move to protect its artists, Sony has requested that AI deepfake tracks be removed from streaming platforms.

Sony Music has requested the removal of more than 135,000 tracks that impersonate its artists using AI-generated “deepfake” technology. The recordings appeared on streaming platforms and falsely claimed to feature well-known acts including Beyoncé, Queen, and Harry Styles, according to a report by BBC News.

These tracks are created using generative AI tools that can replicate an artist’s voice and style. Sony says this type of content is causing “direct commercial harm to legitimate recording artists” and is becoming more common as the technology becomes cheaper and easier to use.

The company believes the number of fake tracks identified so far is only a portion of what is currently available online. Since March 2025 alone, it has flagged around 60,000 songs that falsely claim to feature artists from its roster. Other artists reported to be affected include Bad Bunny, Miley Cyrus, and Mark Ronson.

Dennis Kooker, President of Sony’s Global Digital Business, explained how these uploads can impact releases. “In the worst cases, [the deepfakes] potentially damage a release campaign or tarnish the reputation of an artist.” Kooker also noted, “The problem with deepfakes are they are a demand-driven event. They are taking advantage of the fact an artist is out there promoting their music. That is when deepfakes are at their worst – building off and benefiting from the demand the artist has created [and] ultimately detracting from what the artist is trying to accomplish.”

Digital Music News also highlights similar incidents across the industry, including fake releases appearing alongside legitimate albums or even being uploaded to official artist pages.

The issue sits alongside a wider concern around streaming manipulation. This involves uploading content, sometimes AI-generated, and artificially boosting play counts to generate royalty payments. According to the BBC, the music industry estimates that up to 10% of content on streaming platforms could be fraudulent, with AI increasing the scale of the problem.

Discussions around AI regulation are ongoing. Industry figures welcomed signs that the UK government is reconsidering proposals that would have allowed AI companies to train on copyrighted material without permission. IFPI CEO Victoria Oakley said, “I think we’ve seen a lot of governments really grappling with this issue because they are trying to square a circle: They are trying to protect creativity and at the same time encourage innovation.” She added, “I’m very optimistic that… in the UK, they [have] decided to pause and think again.”

There are also calls for clearer labelling of AI-generated music on streaming platforms. Oakley said, “The challenge of identifying and labelling AI material is absolutely the next critical challenge.” Some platforms have started doing this, tackling a problem that Oakley describes as “very simple to fix”.

Deezer has introduced tools to detect and tag AI-generated tracks. Other platforms are also exploring transparency measures, with Apple recently introducing Transparency Tags, which rely on labels and distributors to apply them to content.

The importance of transparency for listeners was emphasised by Kooker: “Without proper identification, fans can’t distinguish between genuine human creativity versus unauthorised, AI-generated content, which risks creating confusion, undermining trust, and impacting user experiences.” He added, “Transparency shouldn’t be optional, it’s the foundation of a fair and sustainable music ecosystem.”

As the volume of AI-generated content continues to grow rapidly, how platforms manage and label this material will remain a key issue across the industry. Clear identification and consistent monitoring will be crucial to protecting artists’ work and maintaining trust in streaming services.


Distribute your music to all leading streaming platforms like Spotify and Deezer with RouteNote. Sign up today to get started.