Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators. Just as important, though, these opportunities must be balanced with the responsibility to protect the YouTube community. All content uploaded to YouTube is subject to its Community Guidelines, regardless of how it is generated, but AI will introduce new risks that require new approaches.
YouTube is in the early stages of this work and will continue to evolve its approach as the technology and the platform develop. Here's a look at what YouTube will roll out:
1. Disclosure policy
YouTube will require creators to clearly disclose when a video has been altered or synthetically created using AI tools. This helps viewers distinguish genuine content from potentially misleading videos: for example, an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do. Disclosure is especially important when the content touches on sensitive topics such as elections, ongoing conflicts, public health crises, or public officials.
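As a rough illustration only, here is a minimal Python sketch of how a creator's disclosure flag could drive a viewer-facing label, with extra prominence for sensitive topics. The field names, topic list, and label text are all hypothetical; YouTube has not published implementation details.

```python
from dataclasses import dataclass

# Hypothetical topic list; the real set of sensitive topics is not public.
SENSITIVE_TOPICS = {"elections", "ongoing conflicts", "public health", "public officials"}

@dataclass
class UploadMetadata:
    video_id: str
    altered_or_synthetic: bool  # creator's disclosure at upload time
    topics: set[str]            # topics declared or detected for the video

def disclosure_label(meta: UploadMetadata) -> str | None:
    """Return the label viewers should see, or None if no label is needed."""
    if not meta.altered_or_synthetic:
        return None
    if meta.topics & SENSITIVE_TOPICS:
        # Sensitive subject matter warrants a more prominent label.
        return "Prominent label: altered or synthetic content"
    return "Label: altered or synthetic content"

video = UploadMetadata("abc123", altered_or_synthetic=True, topics={"elections"})
print(disclosure_label(video))  # -> "Prominent label: altered or synthetic content"
```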
2. Removal request via privacy policy
YouTube will make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, through its privacy request process. Not all flagged content will be removed; the platform will weigh various factors when evaluating these requests, such as whether the content is parody or satire, whether the person making the request can be uniquely identified, and whether it features a public official or well-known individual, for whom there may be a higher bar.
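Those factors could be pictured as a simple triage function. This is a hypothetical sketch: the factors mirror the ones named above, but the decision order and outcomes are invented for illustration and do not reflect YouTube's actual evaluation process.

```python
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    uniquely_identifiable: bool   # can the requester be uniquely identified?
    is_parody_or_satire: bool
    depicts_public_figure: bool

def evaluate(request: RemovalRequest) -> str:
    """Illustrative triage of a privacy removal request."""
    if not request.uniquely_identifiable:
        return "deny: requester cannot be uniquely identified in the content"
    if request.is_parody_or_satire:
        return "deny: parody or satire may be allowed to remain"
    if request.depicts_public_figure:
        # Public officials and well-known individuals face a higher bar.
        return "escalate: route to specialist review"
    return "remove: simulated likeness of a private individual"
```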
3. Deploying AI to power content moderation
YouTube has always used a combination of people and machine learning technology to enforce its Community Guidelines, with more than 20,000 reviewers across Google operating around the world. In these systems, AI classifiers help detect potentially violative content at scale, and human reviewers confirm whether content has actually crossed policy lines. AI continues to increase both the speed and accuracy of these content moderation systems.
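The human/machine split described here can be sketched as a simple routing step. Everything in this snippet, including the classifier interface, the threshold, and the review queue, is assumed for illustration and does not reflect YouTube's actual internals.

```python
from typing import Callable

REVIEW_THRESHOLD = 0.5  # assumed score above which a human takes a look

def triage(video_id: str,
           classify: Callable[[str], float],
           review_queue: list[str]) -> str:
    """Route a video based on a classifier's violation score."""
    score = classify(video_id)  # assumed: probability of a policy violation
    if score >= REVIEW_THRESHOLD:
        # The classifier surfaces candidates at scale; a reviewer makes
        # the final call on whether a policy line was actually crossed.
        review_queue.append(video_id)
        return "queued for human review"
    return "no action"
```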
One clear area of impact has been in identifying novel forms of abuse. When a new threat emerges, the systems initially have relatively little context to understand and identify it at scale. Generative AI, however, helps Google rapidly expand the set of information that classifiers are trained on, so they can identify and catch such content much more quickly. These improvements in speed and accuracy also reduce the amount of harmful content human reviewers are exposed to.
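A hedged sketch of that training-data expansion follows, assuming placeholder generate_variants and retrain functions rather than any real API: a handful of real examples of a new abuse pattern is grown into a larger training set with synthetic variations.

```python
from typing import Callable

def expand_and_retrain(seed_examples: list[str],
                       generate_variants: Callable[[str, int], list[str]],
                       retrain: Callable[[list[str]], object],
                       variants_per_seed: int = 10):
    """Grow a small set of novel-abuse examples into a larger training set."""
    training_set = list(seed_examples)
    for example in seed_examples:
        # Hypothetical step: a generative model synthesizes variations
        # of the new abuse pattern to give the classifier more coverage.
        training_set.extend(generate_variants(example, variants_per_seed))
    # Retrain the classifier so it catches the new threat at scale.
    return retrain(training_set)
```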
Beyond product features and policies, Google and YouTube will continue to invest deeply in awareness and education efforts that promote critical thinking and help people spot misinformation. These initiatives include fact-checking workshops, collaborations with content creators (the “Mag-ingat” song with Ben&Ben), and partnerships with media literacy organizations (the #YOUThink publication with CANVAS), among many others.