YouTube Rolls Out AI Likeness Detection to Combat Deepfakes, Offering New Protections for MENA’s Public Figures

YouTube is expanding its likeness detection technology to the entertainment industry, providing a new tool for public figures to combat the unauthorized use of their image in AI-generated content and deepfakes. The company announced the move on Tuesday, broadening access to a system designed to give individuals more control over their digital identity.

Quick Facts

  • New tool detects AI-generated faces.
  • Expands protection to celebrities and agencies.
  • Allows requests for video removal.

How the AI Watchdog Works

The technology operates much like YouTube’s established Content ID system, which flags copyright-protected material in user uploads. Instead of matching audio or video clips, however, the new feature scans uploads for simulated faces.

When the system detects a visual match of an enrolled public figure, their representatives can review the content. From there, they have three options: request the video’s removal for violating YouTube’s privacy policy, file a formal copyright removal request, or take no action.

YouTube has clarified that not all detected content will be removed. The platform’s policies still permit parody and satire, creating a distinction between malicious deepfakes and creative commentary.

Entertainment Industry Backing

The expansion brings major players from the entertainment world into the fold, including talent agencies like CAA, UTA, and WME, along with Untitled Management. These agencies provided feedback during the tool’s development.

Crucially, an entertainer does not need to have a personal YouTube channel to be protected by the system. Their management or agency can enroll their likeness, allowing the tool to scan the platform on their behalf. This addresses a common issue where celebrities find their AI-generated likeness used in scam advertisements without their knowledge or consent.

The program was initially piloted with a small group of YouTube creators last year before being offered to politicians, government officials, and journalists earlier this spring.

Implications for the MENA Region

For the rapidly growing creator and entertainment ecosystem in the Middle East and North Africa, this tool offers a much-needed layer of security. As influencers, actors, and public figures across the region build their brands, their digital likenesses become valuable assets vulnerable to misuse.

From Dubai’s social media stars to Cairo’s film actors, the risk of being featured in AI-generated scam ads or defamatory content is a rising concern. YouTube’s new system provides a direct mechanism for MENA-based talent and their management to police the platform and protect their reputation and commercial interests.

Future Developments and Regulation

YouTube confirmed that the technology will eventually support audio detection, adding another dimension to its protective capabilities.

Beyond its own platform, the company is advocating for federal-level protections in the United States, supporting the NO FAKES Act. This proposed legislation aims to regulate the use of AI to create unauthorized digital replicas of an individual’s voice or appearance. While the company has not released specific data on removals, it noted in March that the volume of content processed by the tool was still “very small.”

About YouTube

YouTube is an American online video-sharing and social media platform headquartered in San Bruno, California. Launched on February 14, 2005, by Steve Chen, Chad Hurley, and Jawed Karim, it is now owned by Google and is the second most visited website in the world, after Google Search.

Source: TechCrunch