YouTube has begun introducing a new AI detection tool designed to identify and flag videos using creators’ likenesses without permission. The platform developed the feature in partnership with Creative Artists Agency (CAA) to help protect artists and public figures from deepfakes.
Currently, the tool is being tested with a select group of creators and will expand to all members of the YouTube Partner Program in the coming months. It functions similarly to YouTube’s Content ID system, which identifies copyrighted material. Creators will see flagged videos in YouTube Studio’s Content Detection tab and can request removal if their likeness is misused.
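YouTube has not disclosed how its detection works internally. Systems of this kind, however, typically compare face embeddings from uploaded videos against a verified reference embedding of the creator. The sketch below is purely illustrative: the vectors, video IDs, and the 0.85 threshold are all made-up assumptions, not YouTube's actual method.

```python
# Illustrative sketch of likeness matching via embedding similarity.
# All names, vectors, and thresholds are hypothetical assumptions;
# YouTube has not published how its detection tool works.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_matches(reference, video_embeddings, threshold=0.85):
    """Return IDs of videos whose face embedding is close to the
    creator's verified reference embedding."""
    return [vid for vid, emb in video_embeddings.items()
            if cosine_similarity(reference, emb) >= threshold]

# Mock data: a verified reference embedding and two uploaded videos.
reference = [0.9, 0.1, 0.4]
uploads = {
    "video_a": [0.88, 0.12, 0.41],  # near-duplicate of the reference face
    "video_b": [0.1, 0.9, 0.2],     # unrelated face
}
print(flag_matches(reference, uploads))  # → ['video_a']
```

In a real pipeline, flagged IDs like these would surface in a review queue (analogous to the Content Detection tab) where the creator decides whether to request removal.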
To access the tool, creators must verify their identity with a photo ID and a short video clip. YouTube says the goal is to help users safeguard their reputations and prevent audiences from being misled. The company also noted that a lack of detected matches simply means no unauthorized use of a creator's likeness has been found so far.
The move follows YouTube’s previous updates allowing removal requests for videos imitating a person’s voice or image. It’s part of broader efforts to address the ethical challenges posed by generative AI, even as YouTube continues to develop new AI-driven music and content tools.