The Oversight Board, Meta's semi-independent policy council, is turning its attention to how the company's social platforms handle explicit, AI-generated images. The board has announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta's systems fell short in detecting and responding to the explicit content.
In both cases, the platforms have since removed the media. The Oversight Board takes up cases concerning Meta's moderation decisions, and users must first appeal a moderation action to Meta before approaching the board.
In the first case, the board said, a user reported an AI-generated nude image of an Indian public figure on Instagram as pornography. The image was posted by an account dedicated to sharing AI-generated images of Indian women, and the majority of its audience is based in India.
Despite the report, Meta did not remove the image, and the report ticket was closed automatically after 48 hours without further review. When the original complainant appealed the decision, the report was again closed automatically without any human review by Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.
Only after the user appealed to the Oversight Board did the company remove the content, citing a breach of its community standards on bullying and harassment.
The second case involves Facebook, where a user posted an explicit AI-generated image resembling a U.S. public figure in a group dedicated to AI creations. This time, the social network removed the image promptly: it had been posted earlier by another user, and Meta had already added it to its Media Matching Service Bank under the category "derogatory sexualized photoshop or drawings."
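That banked-match workflow explains why the second takedown was immediate: once an image is fingerprinted into the bank, identical re-uploads can be blocked automatically, before any human review. Below is a minimal sketch of the idea in Python, assuming a simple exact-hash lookup; the function names are illustrative, and Meta's actual service relies on perceptual hashing rather than the exact digest used here.

```python
import hashlib

# Toy "media matching bank" keyed by content hash. Real systems such as
# Meta's Media Matching Service use perceptual hashes (e.g. the
# open-sourced PDQ algorithm) that survive re-encoding and small edits;
# the plain SHA-256 digest used here only catches exact duplicates.
bank: dict[str, str] = {}  # digest -> violation category

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def add_to_bank(image_bytes: bytes, category: str) -> None:
    bank[fingerprint(image_bytes)] = category

def check_upload(image_bytes: bytes) -> str | None:
    """Return the matched violation category, or None if the image is unknown."""
    return bank.get(fingerprint(image_bytes))

# Once the first reported instance is banked, any identical re-upload
# matches immediately, without waiting for a new report.
add_to_bank(b"<first reported image>", "derogatory sexualized photoshop or drawings")
print(check_upload(b"<first reported image>"))  # -> the banked category
print(check_upload(b"<new, unseen image>"))     # -> None
```

The trade-off this sketch illustrates is the one the two cases expose: banked matching is fast and automatic for content the system has already seen, but a first-of-its-kind image, like the one in the India case, still depends on the slower report-and-review pipeline.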
In recent years, some generative AI tools have made it possible to produce pornographic content, although not all tools permit it. Deepfakes have become a particular concern in regions like India, where data suggests women are disproportionately targeted in deepfaked videos. While the country has considered introducing deepfake-specific regulation, no definitive measures have been enacted yet.
Although the country has legal provisions for reporting online gender-based violence, experts note that the process can be cumbersome and that victims often receive little support. Worldwide, only a handful of laws specifically address the creation and distribution of pornographic content generated with AI tools. Some U.S. states have enacted legislation targeting deepfakes, and the United Kingdom recently passed a law criminalizing the creation of sexually explicit AI-generated imagery.
The Oversight Board has invited public comments, with a deadline of April 30, on the harms caused by deepfake porn, contextual information about the spread of such content in regions like the U.S. and India, and possible shortcomings in Meta's approach to detecting AI-generated explicit imagery. After investigating the cases and weighing the public input, the board will publish its decisions on its website in the coming weeks.
These cases underscore the challenges major platforms face in adapting older moderation processes to the flood of content enabled by AI-powered tools. Companies like Meta are experimenting with AI-driven content generation while also working to improve their detection of such imagery.
Despite these efforts, perpetrators continue to find ways to circumvent detection systems and share problematic content on social platforms.