

YouTube has officially introduced a new AI-powered likeness detection tool to help protect creators from AI-generated deepfakes. The feature is designed to identify videos in which a person's face or voice has been synthetically recreated without their consent.

According to YouTube, eligible creators can now access the tool by completing a detailed onboarding process that includes submitting a government-issued ID and a video selfie to verify their identity. Once verified, creators can monitor and review potential deepfake videos through a dedicated dashboard under the Content ID section.

Initially, the feature is available to members of the YouTube Partner Program, with plans to gradually expand it to more users worldwide. The onboarding process is intentionally rigorous to prevent fake registrations and impersonation. All data submitted during this process is stored securely on Google's servers.
Once access is granted, creators can view flagged videos, sorted by priority, and choose either to request removal or to archive them. YouTube will then review the reported videos and take action if they violate its policies.

Because the feature is in its early phase, some users may see their own videos appear in the scan results as the AI system continues to learn and improve. Creators can disable the tool at any time through the dashboard's Manage Tool option, after which YouTube will stop scanning their likeness data within 24 hours.

This move marks a major step by YouTube to strengthen creator protection amid growing concerns over AI-driven misinformation and identity misuse.












