
Meta's AI Porn Dilemma: Oversight Board Demands Clearer Rules

Synopsis: Meta's Oversight Board has called for clearer rules on AI-generated pornography involving real people. The board reviewed cases involving fake explicit images of public figures from India and the US posted on Facebook and Instagram. Meta has agreed to review the recommendations.
Thursday, August 1, 2024
Source: ContentFactory

Meta's Oversight Board has issued a stern warning to the social media giant, urging it to revamp its policies regarding AI-generated pornographic content. The board, which operates independently despite being funded by Meta, made this call after examining two cases involving artificially created explicit images of well-known women from India and the United States.

The board's investigation revealed that Meta's current rules do not adequately address the growing threat of AI-generated pornography. In both cases reviewed, the board determined that the images violated Meta's existing rule against "derogatory sexualized photoshop," which the company categorizes as a form of bullying and harassment. However, the board emphasized that this rule is not comprehensive enough to cover the wide range of AI-generated content now possible.

One of the most concerning aspects highlighted by the board was Meta's inconsistent handling of these cases. In the instance involving the Indian public figure, Meta failed to review a user's report within the standard 48-hour window, resulting in the case being automatically closed without action. Even after an appeal, the company initially refused to act, only reversing its decision after the Oversight Board became involved. In contrast, Meta's systems automatically removed the image of the American celebrity, underscoring the uneven application of its policies.

The board strongly criticized Meta's approach to managing its database of prohibited images. According to the report, Meta admitted to relying primarily on media coverage to determine which images to add to its database for automatic removal. The board described this practice as worrying, pointing out that many victims of deepfake pornography are not public figures and may not receive media attention, leaving them vulnerable to ongoing exploitation.

In response to these findings, the Oversight Board has recommended that Meta update its rules to clarify their scope. The board specifically suggested that the term "photoshop" is too narrow and that the prohibition should encompass a broader range of editing techniques, including the use of generative AI. This recommendation reflects the rapidly evolving landscape of digital image manipulation and the need for policies to keep pace with technological advancements.

The board emphasized the severity of the harm caused by AI-generated pornography, stating that content removal is the only effective way to protect those impacted. It urged Meta to take a more proactive and comprehensive approach to identifying and removing such content, rather than relying on victims to repeatedly search for and report non-consensual depictions of themselves.

Meta has acknowledged the board's recommendations and has committed to reviewing them. The company has promised to provide an update on any changes adopted as a result of this review. This situation highlights the ongoing challenges faced by social media platforms in balancing freedom of expression with the need to protect users from harmful content, particularly as AI technology continues to advance and create new forms of potential abuse.