ConnectHub

Tech Giants Under Fire: Urgent Call to Combat Non-Consensual Explicit Imagery

Synopsis: A recent letter from US senators criticizes tech giants like X, Google, and Discord for not participating in programs designed to remove non-consensual explicit images. These voluntary programs, including the National Center for Missing and Exploited Children’s “Take It Down” and the Revenge Porn Helpline’s “StopNCII,” are already used by companies like Meta and TikTok. As AI advances facilitate the creation of such harmful content, the letter demands increased corporate involvement to better protect victims.
Sunday, August 11, 2024
Source: ContentFactory

A recent letter addressed to major tech companies, including X, Google’s parent company Alphabet, Amazon, and others, highlights a critical gap in efforts to combat non-consensual explicit imagery online. Authored by Senators Jeanne Shaheen and Rick Scott, alongside ten other senators, the letter criticizes these tech giants for their inadequate engagement with two crucial programs designed to help remove non-consensual explicit content from the internet. This intervention comes as artificial intelligence technologies exacerbate the spread and creation of such harmful content.

The letter, which was shared exclusively with CNN, urges these tech firms to join the National Center for Missing and Exploited Children’s “Take It Down” program and the Revenge Porn Helpline’s “StopNCII” initiative. These programs, which already count companies like Meta, Snap, TikTok, and PornHub among their participants, let users generate unique numerical codes (hashes) from explicit images they want removed. Participating platforms can then use these codes to locate and delete the unwanted content more efficiently. By joining the programs, companies would streamline the removal process for users, who otherwise must contact each platform individually.

In addition to urging participation in these programs, the letter underscores the life-altering consequences non-consensual intimate imagery can have on victims. Such content, often referred to as revenge porn, can significantly disrupt individuals' lives, careers, and personal relationships. The letter stresses that by increasing their involvement in these initiatives, tech companies can take actionable steps to mitigate these impacts.

Although most of the companies named in the letter have policies against the creation or sharing of non-consensual explicit imagery, and some offer their own removal mechanisms, the letter argues that the voluntary programs provide a more efficient approach. Google, for instance, has recently taken steps to keep such content from appearing prominently in search results, but joining these programs would consolidate efforts across multiple platforms, simplifying the process for victims.

The letter also highlights the pressing nature of this issue, as AI technologies have made it increasingly easy to create and disseminate non-consensual deepfake imagery. In the past year, individuals ranging from celebrities to high school students have been targeted with AI-generated pornographic content. Although nine US states have laws against non-consensual deepfake images, there is no federal legislation, leaving many victims without adequate legal recourse.

The bipartisan support for addressing this issue is notable. At a recent Capitol Hill hearing, affected teens and parents testified about the severe impacts of AI-generated pornography. In response, Senator Ted Cruz introduced a bill, supported by Senator Amy Klobuchar and others, which aims to criminalize the publication of such images and mandate their removal by social media platforms upon notice from victims. This legislative push underscores the growing urgency to address non-consensual explicit content comprehensively and effectively.