YouTube has updated its AI likeness detection tool to protect more high-profile people against deepfakes.

YouTube has announced that it is expanding its AI likeness detection technology to celebrities and the wider entertainment industry, giving more people the ability to track and act on deepfake content that uses their image without permission.

The update means talent agencies, management companies, and the people they represent can now access the system, which scans for AI-generated videos that replicate a person’s face. Once content is flagged, users can request its removal, file a copyright claim, or leave it in place.

YouTube explained that “Likeness detection works similarly to Content ID: it looks for AI-generated content with a participant’s likeness, like a deepfake of their face, and gives them the power to find it and request removal.” The comparison to Content ID is key, as it positions the tool as a familiar rights management system, but adapted for the growing challenge of AI-generated visuals.

The rollout builds on earlier phases of testing. As TechCrunch notes, the feature was first introduced to a limited group of creators in a pilot programme last year, before expanding to politicians, government officials, and journalists earlier in 2026. Now, it is being extended to the entertainment industry, marking its most significant expansion so far.

The tool has also been developed in collaboration with major industry players. YouTube says it has worked closely with agencies including CAA, UTA, WME, and Untitled Management to refine how the system works in practice. “With support from leading talent agencies and management companies… we’ve worked to refine how likeness detection can best serve talent,” the company said.

One notable aspect of the feature is that users do not need to run a YouTube channel to access it. This opens it up to a much wider group of artists and entertainers who may not be active on the platform but still want to protect their identity.

Once enrolled, the system scans uploaded videos for visual matches. However, as Digital Music News points out, “detection does not guarantee removal.” YouTube has made it clear that some content will remain, particularly where it falls under parody or satire.
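YouTube has not published how its matching works, but face-matching systems of this kind typically compare numerical embeddings of faces and flag an upload when its similarity to an enrolled reference crosses a threshold. A minimal Python sketch of that general idea, using made-up vectors and an illustrative cutoff (not YouTube’s actual method or values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical face embeddings: a reference vector for the enrolled
# person, and one extracted from a frame of an uploaded video.
reference = [0.12, 0.85, 0.33, 0.41]
candidate = [0.10, 0.80, 0.35, 0.44]

THRESHOLD = 0.95  # illustrative cutoff, chosen for this example only

similarity = cosine_similarity(reference, candidate)
is_match = similarity >= THRESHOLD
print(f"similarity={similarity:.3f}, flagged={is_match}")
```

In a real system the flag would only queue the video for review, consistent with YouTube’s point that detection does not guarantee removal.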

Beyond detection, YouTube is also supporting wider efforts to regulate AI misuse. TechCrunch references how the company has backed proposed legislation such as the NO FAKES Act in the United States, which aims to control how a person’s voice and likeness can be recreated using AI without permission.

For now, the company has noted that removals linked to the tool remain “very small,” suggesting it is still early in its rollout. Even so, this latest expansion reinforces the broader push to protect people’s likenesses while keeping the use of AI in media fair. Streaming services are already rolling out their own AI detection tools, tagging AI-generated tracks, and adding protections to stop fake releases appearing on artist profiles. Together, these changes point to a more controlled approach to AI content, with platforms putting systems in place to manage how it is identified, distributed, and monetised.


Make sure your music is protected and working for you. Distribute your tracks with RouteNote and stay in control across all leading streaming platforms.