San Francisco sues AI deepfake pornography sites: 200 million visits fuel harmful trend of undressing women and girls – Times of India

San Francisco’s chief deputy city attorney, Yvonne Meré, has filed a lawsuit against 16 websites that use AI to create deepfake pornography by “undressing” women and girls in photos without their consent.
The first-of-its-kind legal action aims to shut down popular sites that have fueled a harmful trend among teenage boys, who use nudification apps to manipulate images of their female classmates.
According to the New York Times, the 16 targeted sites were visited 200 million times in the first six months of this year. The entities behind these websites are located in California, New Mexico, the United Kingdom, and Estonia. When reached for comment, representatives of the websites either were unavailable or did not respond.
One site promotes its services by asking, “Have someone to undress?” Another states, “Imagine wasting time taking her out on dates,” urging users to visit the website instead “to get her nudes.” Some sites offer a few initial images for free but charge for more, accepting cryptocurrency or credit card payments.
The deepfake technology used by these sites relies on AI models trained with real pornography and imagery depicting child abuse to generate authentic-looking nude photos from clothed images.
City Attorney David Chiu, the office’s top lawyer, emphasized that those behind the images face minimal repercussions. Once the images begin circulating, he noted, it is difficult to identify which website produced them, which makes it hard for victims to pursue legal action successfully.
“The article is flying around our office, and we were like, ‘What can we do about this?'” Chiu recalled in an interview. “No one has tried to hold these companies accountable.”
Sara Eisenberg, head of the legal unit focusing on major social problems, highlighted that the issue cannot be solved merely by educating teenagers on safe technology use. Any photo can be manipulated without the subject’s consent, rendering traditional safeguards ineffective.
“You can be as internet-savvy and social media-savvy as you want, and you can teach your kids all the ways to protect themselves online, but none of that can protect them from somebody using these sites to do really awful, harmful things,” Eisenberg said.
The lawsuit is seeking an injunction to shut down the websites and permanently restrain them from creating deepfake pornography in the future. It also demands civil penalties and attorneys’ fees.
The suit argues that these sites violate state and federal revenge pornography laws, child pornography laws, and California’s Unfair Competition Law, which prohibits unlawful and unfair business practices.
Meré took action after reading about the damaging effects of deepfake images in a New York Times article. She immediately contacted Eisenberg, and together, they sought support from Chiu to craft the lawsuit.
“The technology has been used to create deepfake nudes of everyone from Taylor Swift to ordinary middle-school girls with few apparent repercussions,” Chiu said. “The images are sometimes used to extort victims for money or humiliate and harass them.”
Experts warn that deepfake pornography poses severe risks to victims, harming their mental health, reputations, and college and job prospects. The problem is exacerbated by the difficulty of tracing the images back to their source, making legal recourse challenging.
“This strategy could be viewed as a Whac-a-Mole approach since more sites could crop up,” Chiu acknowledged. However, the suit proposes to add more sites as they are discovered, aiming for broader enforcement as the issue evolves.
San Francisco, being a hub for the artificial intelligence industry with major companies like OpenAI and Anthropic based there, is a fitting venue for this legal challenge. Chiu acknowledged the positive contributions of the AI industry but pointed out that deepfake pornography represents a “dark side” that must be addressed.
“Keeping pace with the rapidly changing industry as a government lawyer is daunting,” Chiu said. “But that doesn’t mean we shouldn’t try.”
The lawsuit marks a significant effort to combat the misuse of AI technology in creating harmful content and to hold accountable those who perpetuate these destructive practices.