In an effort to crack down on content that 'realistically simulates' minors who have died or been the victims of violent or deadly incidents, YouTube is updating its harassment and cyberbullying policy. The Google-owned platform says it will begin removing such content on January 16.
The policy change comes as some true crime content creators use AI to recreate the likenesses of missing or murdered children. In these unsettling videos, AI gives the young victims of high-profile crimes a childlike 'voice' to describe their own deaths.
According to The Washington Post, content creators have used AI in recent months to narrate several high-profile cases, including the abduction and murder of James Bulger, a British two-year-old. Similar AI narrations have been produced about Gabriel Fernández, an eight-year-old who was tortured and killed by his mother and her boyfriend in California, and Madeleine McCann, a British three-year-old who disappeared from a resort.
Content that violates the new policy will be removed from YouTube, and users who receive a strike will be unable to upload videos, live streams, or stories for one week. After three strikes, the user's channel will be permanently removed from YouTube.
The latest changes come a little over two months after YouTube introduced new rules around responsible disclosure of AI content, along with new tools to request the removal of deepfakes. One of those rules requires users to disclose when they have created altered or synthetic content that appears realistic. Users who fail to properly disclose their use of AI may have their content removed, be suspended from the YouTube Partner Program, or face other penalties, the company warned.
YouTube also said at the time that some AI content, even if labelled, may be removed if it is used to depict 'realistic violence'.