YouTube To Introduce New AI Tools for Detecting Deepfakes in Music and Faces

YouTube is enhancing its efforts to combat the rising threat of AI-generated deepfake content with two new detection tools. The Google-owned platform recently announced plans to develop AI-powered features aimed at identifying both AI-generated music and faces, helping protect creators and public figures from unauthorized use of their likeness.

The first tool, referred to as “synthetic-singing identification technology,” is designed to detect AI-generated songs that mimic well-known artists. It will integrate with YouTube’s existing Content ID system, which already helps copyright owners manage and remove infringing material, and is expected to help high-profile musicians like Drake, Taylor Swift, and Billie Eilish combat the rise of AI impersonators. However, it remains unclear whether the tool will be as effective for lesser-known artists whose voices are not as widely recognized.

The second tool focuses on detecting AI-generated deepfakes of faces, a growing concern for public figures, including influencers, actors, and athletes. While this tool will allow those individuals to track down AI-generated videos impersonating them, YouTube has not confirmed whether it will proactively use the tool to remove deepfakes of non-famous individuals or to tackle the increasing number of AI-generated scam videos.

YouTube’s Community Guidelines already prohibit deceptive content, including deepfakes used for scams, but the responsibility for reporting these videos largely falls on users. This has led to concerns, especially as deepfake videos have spiked 550% since 2021, according to recent studies. Most of these videos involve the unauthorized use of women’s likenesses in inappropriate content.

Despite the challenges, YouTube’s goal remains to strike a balance between encouraging AI as a creative tool and preventing its misuse. The platform is investing in these AI detection technologies to protect creators and public figures as AI-generated content continues to rise.

The release dates for these new tools have yet to be announced.
