Meta isn't the only company grappling with the rise of AI-generated content on its platform. YouTube also quietly rolled out a policy change in June that allows people to request the removal of AI-generated or other synthetic content that simulates their face or voice. Requests are submitted through YouTube's privacy request process, and the change expands on the responsible AI agenda the company first introduced in November.
Rather than requesting that content be taken down for being misleading, such as a deepfake, YouTube wants affected parties to request its removal directly as a privacy violation. According to its recently updated Help documentation on the topic, first-party claims are required, with a handful of exceptions, such as when the affected person is a minor, lacks access to a computer, or is deceased.
However, simply filing a removal request does not necessarily mean that the content will be removed. YouTube warns that it will make its own judgment on the complaint based on several factors.
For example, it may consider whether the content is disclosed as synthetic or AI-generated, whether it uniquely identifies an individual, and whether it could be considered parody, satire, or otherwise of value and in the public interest. The company also notes that it may consider whether the AI content features a public figure or other well-known person, and whether it depicts them engaging in "sensitive behavior" such as criminal activity, violence, or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could potentially sway votes.
YouTube says it also gives the content uploader 48 hours to respond to the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube initiates a review. The company also warns uploaders that removal means taking the video off the site entirely and, if applicable, removing the person's name and personal information from the video's title, description, and tags. Uploaders can also blur the faces of people in their videos, but they can't simply make the video private to comply with the removal request, as a private video can be made public again at any time.
However, the company has not widely advertised the policy change. In March, it introduced a tool in Creator Studio that lets creators disclose when realistic-looking content was made with altered or synthetic media, including generative AI. More recently, it began testing a feature that allows viewers to add crowdsourced notes providing additional context on videos, such as whether a video is intended as parody or is misleading in some way.
YouTube isn't against the use of AI, as it has experimented with generative AI itself, including a comment summarizer and a conversation tool for asking questions about a video or getting recommendations. However, the company has warned before that simply labeling AI content does not automatically protect it from removal, as the content must still comply with YouTube's Community Guidelines.
When privacy complaints are filed over AI content, YouTube will not punish the original creator of the content.
"For creators, if you receive a notice of a privacy complaint, please note that privacy violations are separate from Community Guidelines warnings, and receiving a privacy complaint does not automatically result in a warning," a company representative shared last month on the YouTube Community site, where the company notifies creators directly about new policies and features.
In other words, YouTube's privacy guidelines are separate from its Community Guidelines, and some content can be removed from YouTube as a result of a privacy request even if it doesn't violate the Community Guidelines. While the company doesn't impose a penalty, such as an upload restriction, when a creator's video is removed following a privacy complaint, YouTube tells us it may take action against accounts with repeat violations.
Updated 7/1/24, 4:17 p.m. ET with more information about the actions YouTube can take when privacy violations occur.