Radical change in YouTube policies in the "deepfake" era. Content creators' disclosure obligations do not apply to children's animations
YouTube has updated its publishing regulations in line with the deepfake era.
Starting March 19th, anyone uploading videos to the platform must disclose, where applicable, the use of synthetic media, including generative artificial intelligence, so that viewers know that what they are seeing is not real.
YouTube stated that the change applies to realistically altered media, such as "making a real building appear to be on fire" or swapping "the face of one individual with that of another".
The new policy indicates that YouTube is taking steps that could contribute to limiting the spread of misinformation generated by artificial intelligence, especially in the context of the upcoming US presidential elections. However, surprisingly, the rule makes an exception by excluding AI-generated animations intended for children, which thus do not fall under the new rules for disclosing synthetic content, as reported by Wired.
This means that malicious individuals using AI tools can produce and disseminate videos intended for children without being required to disclose their methods. Parents worried about the videos their children watch will have to identify AI-generated cartoons themselves.
YouTube's new policy also indicates that creators do not need to disclose the use of artificial intelligence for "minor" modifications that are "primarily aesthetic", such as beauty filters or video and audio "clean-up". Additionally, AI can be used to "generate or enhance" a script or subtitles without any disclosure requirement.
Content creators must apply the labels disclosing AI contributions to a video in the new "Creator Studio" interface; if they fail to do so, YouTube will add such a label automatically and may penalize the creators in question, according to the company's new rules.
YouTube's parent company, Google, recently stated that it is modifying its search algorithms to downrank the recent avalanche of AI-generated clickbait, likely produced with tools like ChatGPT. Video generation technology is less mature, but it is improving rapidly, the source indicates.
YouTube is a giant in children's entertainment, surpassing other competitors. In the past, the platform has struggled to moderate the large amount of children's content. It has been criticized for hosting content that seems suitable or attractive to children, but which contains inappropriate themes upon closer inspection.
There are recent reports of an increase in the number of YouTube channels aimed at children, which seem to use AI video generation tools to produce low-quality videos, with generic 3D animations and odd iterations of popular nursery rhymes.
The exemption for animations in YouTube's new policy could mean that parents cannot easily filter such videos out of search results, or prevent YouTube's recommendation algorithm from automatically playing AI-generated cartoons once a child has watched videos from known children's channels.
Some problematic AI-generated content intended for children must still be disclosed under the new rules. In 2023, the BBC investigated a wave of videos aimed at older children that used AI tools to promote pseudoscience and conspiracy theories, including climate change denialism. The new policy would take action against this type of video.
"We require content creators for children to disclose significantly altered or synthetically generated content when it appears realistic", said YouTube spokesperson Elena Hernandez. "We do not require disclosure of content that is clearly unrealistic and does not mislead the viewer into believing it is real".
Parents are put in a difficult position
The YouTube Kids app curates content using a combination of automated filters, human review and user feedback to surface well-made children's videos. However, many parents simply use the main YouTube app to find content for their children, relying on video titles, descriptions and thumbnails to judge what is suitable.
So far, most of the apparently AI-generated children's content found by WIRED on YouTube has been poorly made, much like low-effort conventionally produced cartoons: ugly images, incoherent plots and zero educational value.
AI tools make it easier to produce such content, and in greater volume. Some of the channels WIRED found upload lengthy videos, some running well over an hour. Requiring labels on AI-generated children's content could help parents filter out cartoons that may have been published with minimal human review, or none at all.