
YouTube Introduces New AI Likeness and Deepfakes Protection Measures


YouTube is introducing new AI likeness and deepfake protection measures under its current privacy guidelines. These updates will allow “first parties” to request the removal of unauthorized lookalike and soundalike content.

New measure to combat unauthorized AI-generated content

The new policy has been addressed in a broader privacy guideline update, which states that people can now ask for takedowns of AI-generated content that mimics their appearance or voice.

“In order to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness,” the new guidelines read. Such a removal request will not be granted automatically. Once a request has been submitted, the uploader of the allegedly violating content is given 48 hours to remove it from the platform. If the uploader doesn’t take action within that time frame, YouTube will initiate a review.

The online video-sharing platform will consider a number of factors when determining whether to remove suspected content, including:

  • Whether the content is altered or synthetic;

  • Whether the content is disclosed to viewers as altered or synthetic;

  • Whether the person can be uniquely identified;

  • Whether the content is realistic;

  • Whether the content contains parody, satire, or other public interest value; and

  • Whether the content features a public figure or well-known individual engaging in sensitive behavior such as criminal activity, violence, or endorsing a product or political candidate.

YouTube has also noted that for a removal request to be considered, the claim must come from the “first party,” meaning that only the person whose privacy is being violated can file a request. The platform has notably highlighted that it “will not accept privacy complaints filed on behalf of” employees or companies. The only exceptions to the “first party” rule are:

  • When the claim is being made by a parent or guardian;

  • When the person in question doesn’t have access to a computer;

  • When the claim is being made by a legal representative of the person in question; and

  • When a close relative makes a request on behalf of a deceased person.

It’s important to note that the removal of content under this policy doesn’t count as a “strike” against the uploader. Strikes can carry serious repercussions, including a potential ban from the platform, withdrawal of ad revenue, and other penalties. This is because the new rules fall under YouTube’s privacy guidelines rather than its community guidelines, and only community guidelines violations can lead to strikes.

The new policy, which has been introduced without much fuss, is part of YouTube’s effort to regulate AI-generated content, including deepfakes.

Following the viral success of several musical deepfakes last year (including the infamous “fake Drake feat. The Weeknd” track), YouTube announced it was developing a system for partners to request the removal of content that “mimics an artist’s unique singing or rapping voice.”

Additionally, YouTube now requires any AI-generated content on the platform to be labeled as such and has developed new tools that allow uploaders to add these labels to their content. “Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” YouTube said.

However, even with the labels, AI-generated content can still be removed from YouTube if it violates its community guidelines. “For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers,” the platform’s statement reads.

YouTube is one of many tech companies working to address the controversies surrounding AI-generated content and deepfakes. TikTok and Meta are also implementing new measures to protect viewers and creators from the harms of such technologies.

Meanwhile, the problem is also being tackled at the legislative level. The US Congress is considering several bills, including the “No AI FRAUD” Act and the “NO FAKES” Act.

If passed, these bills would provide individuals with intellectual property rights over their likeness and voice, enabling them to take legal action against creators of unauthorized deepfakes. Moreover, the proposed laws would protect artists from having their work stolen and individuals from being exploited by sexually explicit deepfakes.
