The Race to Prevent ‘the Worst Case Scenario for Machine Learning’

Dave Willner has had a front-row seat to the evolution of the worst things on the internet.

He started working at Facebook in 2008, back when social media companies were making up their rules as they went along. As the company’s head of content policy, it was Mr. Willner who wrote Facebook’s first official community standards more than a decade ago, turning what he has said was an informal one-page list that mostly boiled down to a ban on “Hitler and naked people” into what is now a voluminous catalog of slurs, crimes and other grotesqueries that are banned across all of Meta’s platforms.

So last year, when the San Francisco artificial intelligence lab OpenAI was preparing to launch Dall-E, a tool that allows anyone to instantly create an image by describing it in a few words, the company tapped Mr. Willner to be its head of trust and safety. Initially, that meant sifting through all of the images and prompts that Dall-E’s filters flagged as potential violations — and figuring out ways to prevent would-be violators from succeeding.

It didn’t take long in the job before Mr. Willner found himself considering a familiar threat.

Just as child predators had for years used Facebook and other major tech platforms to disseminate pictures of child sexual abuse, they were now attempting to use Dall-E to create entirely new ones. “I am not surprised that it was a thing that people would attempt to do,” Mr. Willner said. “But to be very clear, neither were the folks at OpenAI.”

For all of the recent talk of the hypothetical existential risks of generative A.I., experts say it is this immediate threat — child predators using new A.I. tools already — that deserves the industry’s undivided attention.

In a newly published paper by the Stanford Internet Observatory and Thorn, a nonprofit that fights the spread of child sexual abuse online, researchers found that, since last August, there has been a small but meaningful uptick in the amount of photorealistic A.I.-generated child sexual abuse material circulating on the dark web.

According to Thorn’s researchers, this has manifested for the most part in imagery that uses the likeness of real victims but visualizes them in new poses, being subjected to new and increasingly egregious forms of sexual violence. The majority of these images, the researchers found, have been generated not by Dall-E but by open-source tools that were developed and released with few protections in place.

In their paper, the researchers reported that less than 1 percent of child sexual abuse material found in a sample of known predatory communities appeared to be photorealistic A.I.-generated images. But given the breakneck pace of development of these generative A.I. tools, the researchers predict that number will only grow.

“Within a year, we’re going to be reaching very much a problem state in this area,” said David Thiel, the chief technologist of the Stanford Internet Observatory, who co-wrote the paper with Thorn’s director of data science, Dr. Rebecca Portnoff, and Thorn’s head of research, Melissa Stroebel. “This is absolutely the worst case scenario for machine learning that I can think of.”

Dr. Portnoff has been working on machine learning and child safety for more than a decade.

To her, the idea that a company like OpenAI is already thinking about this issue speaks to the fact that this field is at least on a faster learning curve than the social media giants were in their earliest days.

“The posture is different today,” said Dr. Portnoff.

Still, she said, “If I could rewind the clock, it would be a year ago.”

In 2003, Congress passed a law banning “computer-generated child pornography” — a rare instance of congressional future-proofing. But at the time, creating such images was both prohibitively expensive and technically complex.

The cost and complexity of creating these images had been steadily declining, but the barrier all but collapsed last August with the public debut of Stable Diffusion, a free, open-source text-to-image generator developed by Stability AI, a machine learning company based in London.

In its earliest iteration, Stable Diffusion placed few limits on the kind of images its model could produce, including ones containing nudity. “We trust people, and we trust the community,” the company’s chief executive, Emad Mostaque, told The New York Times last fall.

In a statement, Motez Bishara, the director of communications for Stability AI, said that the company prohibited misuse of its technology for “illegal or immoral” purposes, including the creation of child sexual abuse material. “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” Mr. Bishara said.

Because the model is open-source, developers can download and modify the code on their own computers and use it to generate, among other things, realistic adult pornography. In their paper, the researchers at Thorn and the Stanford Internet Observatory found that predators have tweaked those models so that they are capable of creating sexually explicit images of children, too. The researchers demonstrate a sanitized version of this in the report by modifying one A.I.-generated image of a woman until it looks like an image of Audrey Hepburn as a child.

Stability AI has since released filters that try to block what the company calls “unsafe and inappropriate content.” And newer versions of the technology were built using data sets that exclude content deemed “not safe for work.” But, according to Mr. Thiel, people are still using the older model to produce imagery that the newer one prohibits.

Unlike Stable Diffusion, Dall-E is not open-source and is only accessible through OpenAI’s own interface. The model was also developed with many more safeguards in place to prohibit the creation of even legal nude imagery of adults. “The models themselves have a tendency to refuse to have sexual conversations with you,” Mr. Willner said. “We do that mostly out of prudence around some of these darker sexual topics.”

The company also implemented guardrails early on to prevent people from using certain words or phrases in their Dall-E prompts. But Mr. Willner said predators still try to game the system by using what researchers call “visual synonyms” — creative terms to evade guardrails while describing the images they want to produce.

“If you remove the model’s knowledge of what blood looks like, it still knows what water looks like, and it knows what the color red is,” Mr. Willner said. “That problem also exists for sexual content.”
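
Guardrails of this sort often begin as simple blocklists over the prompt text. A minimal sketch of that idea appears below; it is an illustration only, not OpenAI's actual moderation system (which layers trained classifiers and human review on top), and the banned vocabulary is invented for the example. A prompt built from "visual synonyms" describes the same image without tripping the list.

```python
# Minimal sketch of a naive prompt blocklist. NOT OpenAI's actual
# moderation stack; the banned vocabulary below is invented purely
# for illustration.
BLOCKED_TERMS = {"blood", "gore"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt contains a banned term and should be rejected."""
    words = set(prompt.lower().split())
    return bool(words & BLOCKED_TERMS)

print(naive_filter("a puddle of blood on the floor"))      # True: blocked
print(naive_filter("a puddle of red water on the floor"))  # False: a
# "visual synonym" evokes the same image with no banned word
```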

Thorn has a tool called Safer, which scans images for child abuse and helps companies report them to the National Center for Missing and Exploited Children, which runs a federally designated clearinghouse of suspected child sexual abuse material. OpenAI uses Safer to scan content that people upload to Dall-E’s editing tool. That’s useful for catching real images of children, but Mr. Willner said that even the most sophisticated automated tools could struggle to accurately identify A.I.-generated imagery.

That is an emerging concern among child safety experts: that A.I. will not just be used to create new images of real children but also to make explicit imagery of children who do not exist.

That content is illegal on its own and will need to be reported. But this possibility has also led to concerns that the federal clearinghouse may become further inundated with fake imagery that would complicate efforts to identify real victims. Last year alone, the center’s CyberTipline received roughly 32 million reports.

“If we start receiving reports, will we be able to know? Will they be tagged or be able to be differentiated from images of real children?” said Yiota Souras, the general counsel of the National Center for Missing and Exploited Children.

At least some of those answers will need to come not just from A.I. companies, like OpenAI and Stability AI, but from companies that run messaging apps or social media platforms, like Meta, which is the top reporter to the CyberTipline.

Last year, more than 27 million tips came from Facebook, WhatsApp and Instagram alone. Already, tech companies use a classification system, developed by an industry alliance called the Tech Coalition, to categorize suspected child sexual abuse material by the victim’s apparent age and the nature of the acts depicted. In their paper, the Thorn and Stanford researchers argue that these classifications should be broadened to also reflect whether an image was computer-generated.

In a statement to The New York Times, Meta’s global head of safety, Antigone Davis, said, “We’re working to be purposeful and evidence-based in our approach to A.I.-generated content, like understanding when the inclusion of identifying information would be most beneficial and how that information should be conveyed.” Ms. Davis said the company would be working with the National Center for Missing and Exploited Children to determine the best way forward.

Beyond the responsibilities of platforms, researchers argue that there is more that A.I. companies themselves can be doing. Specifically, they could train their models to not create images of child nudity and to clearly identify images as generated by artificial intelligence as they make their way around the internet. This would mean baking a watermark into those images that is more difficult to remove than the ones either Stability AI or OpenAI have already implemented.
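
To see why removability matters, consider the crudest possible watermark, sketched below: stamping a tag into every pixel's least significant bit. This is a deliberately weak illustration, not a scheme any of these companies actually uses; even light pixel noise, far less than a JPEG re-encode introduces, erases the mark, which is exactly the failure mode the researchers want future standards to avoid.

```python
# Deliberately fragile watermark sketch: stamp every pixel's least
# significant bit. Not any company's real scheme; shown only to
# illustrate why hard-to-remove marks are the goal.
import numpy as np

def embed_lsb_tag(pixels: np.ndarray) -> np.ndarray:
    """Set every pixel's lowest bit to 1 as a crude 'A.I.-made' tag."""
    return pixels | np.uint8(1)

def read_lsb_tag(pixels: np.ndarray) -> bool:
    """The tag reads back only if most low bits are still 1."""
    return (pixels & 1).mean() > 0.9

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
tagged = embed_lsb_tag(img)
print(read_lsb_tag(tagged))   # True: freshly tagged

# Tiny pixel noise, well below what a JPEG re-encode introduces,
# already scrubs the mark.
noise = np.random.randint(-2, 3, img.shape)
noisy = np.clip(tagged.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(read_lsb_tag(noisy))    # almost certainly False
```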

As lawmakers look to regulate A.I., experts view mandating some form of watermarking or provenance tracing as key to fighting not only child sexual abuse material but also misinformation.

“You’re only as good as the lowest common denominator here, which is why you want a regulatory regime,” said Hany Farid, a professor of digital forensics at the University of California, Berkeley.

Professor Farid is responsible for developing PhotoDNA, a tool launched by Microsoft in 2009 that many tech companies now use to automatically find and block known child sexual abuse imagery. Professor Farid said tech giants were too slow to implement that technology after it was developed, allowing the scourge of child sexual abuse material to fester openly for years. He is currently working with a number of tech companies to create a new technical standard for tracing A.I.-generated imagery. Stability AI is among the companies planning to implement this standard.
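
For readers unfamiliar with how such scanning works, the sketch below shows the general shape of hash-based matching: compute a compact perceptual fingerprint of an image and compare it against fingerprints of known material, tolerating the small changes introduced by resizing or re-encoding. PhotoDNA itself is proprietary, so this example substitutes the open-source imagehash package, and the "known" hash is an invented placeholder.

```python
# Illustrative hash-based image matching, in the spirit of tools like
# PhotoDNA. PhotoDNA itself is proprietary; the open-source `imagehash`
# package's perceptual hash stands in here, and the hash below is an
# invented placeholder, not a real database entry.
from PIL import Image
import imagehash

KNOWN_HASHES = {imagehash.hex_to_hash("ffd8e0c0b0a09080")}  # hypothetical

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Flag an image whose perceptual hash is near any known hash.

    Unlike a cryptographic hash, a perceptual hash changes only a few
    bits when an image is resized or re-encoded, so a small Hamming
    distance still counts as a match.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)
```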

Another open question is how the court system will treat cases brought against creators of A.I.-generated child sexual abuse material — and what liability A.I. companies will have. Though the law against “computer-generated child pornography” has been on the books for two decades, it’s never been tested in court. An earlier law that tried to ban what was then referred to as virtual child pornography was struck down by the Supreme Court in 2002 for infringing on speech.

Members of the European Commission, the White House and the U.S. Senate Judiciary Committee have been briefed on Stanford and Thorn’s findings. It is critical, Mr. Thiel said, that companies and lawmakers find answers to these questions before the technology advances even further to include things like full motion video. “We’ve got to get it before then,” Mr. Thiel said.

Julie Cordua, the chief executive of Thorn, said the researchers’ findings should be seen as a warning — and an opportunity. The social media giants woke up years too late to the ways their platforms were enabling child predators, Ms. Cordua argued; this time, there is still a chance to keep the problem of A.I.-generated child abuse imagery from spiraling out of control.

“We know what these companies should be doing,” Ms. Cordua said. “We just need to do it.”

On TikTok, Users Thumb Their Noses at Looming Ban

Over the last week, the videos started appearing on TikTok from users across the United States.

They all made fun of the same thing: how the app’s ties to China made it a national security threat. Many implied that their TikTok accounts had each been assigned an agent of the Chinese government to spy on them through the app — and that the users would miss their personal spies.

“May we meet again in another life,” one user wrote in a video goodbye set to Whitney Houston’s cover of Dolly Parton’s “I Will Always Love You.” The video included an A.I.-generated image of a Chinese military officer.

The videos were just one way that some of TikTok’s 170 million monthly U.S. users were reacting as they prepared for the app to disappear from the country as soon as Sunday.

The Supreme Court is set to rule on a federal law that required TikTok’s Chinese owner, ByteDance, to sell the app by Jan. 19 or face a ban in the United States. U.S. officials have said China could use TikTok to harvest Americans’ private data and spread covert disinformation. TikTok, which has said a sale is impossible and challenged the law, is now awaiting the Supreme Court’s response.

The possibility that the justices will uphold the law has set off a palpable sense of grief and dark humor across the app. Some users have posted videos suggesting ways to circumvent a ban with technological workarounds. Others have downloaded another Chinese app, Xiaohongshu, also known as “Red Note,” to thumb their noses at the U.S. government’s concerns about TikTok’s ties to China.

The videos highlight the collision taking place online between the law, which Congress passed with wide support last year, and everyday users of TikTok, who are dismayed that the app may soon disappear.

“Much of my TikTok feed now is TikTokers ridiculing the U.S. government, TikTokers thanking their Chinese spy as a form of ridicule,” said Anupam Chander, a professor of law and technology at Georgetown University and an expert on the global regulation of new technologies. “TikTokers recognize that they are not likely to be manipulated by anyone. They are actually quite sophisticated about the information they’re receiving.”

TikTok declined to comment on the users’ references to its ties to China.

Some users are not willing to give up the app — or their supposed spies — so easily.

Hundreds of TikTok videos over the last week have cataloged how teenagers could keep using the app in the United States, according to a review by The New York Times. One of the most popular methods described is the use of a VPN, or a virtual private network, which can mask a user’s location and make it appear that the person is elsewhere.

“They can’t actually ban TikTok in the U.S. because VPNs are not banned,” Sasha Casey, a TikTok user, said in a recent video that was liked over 60,000 times. “Use a VPN. And send a picture to Congress while you do it, because that’s what I’ll be doing.”

While VPNs can make it appear that a phone, a laptop or another electronic device is in a remote location, it is not clear if the technology can circumvent the ban. A device’s real location is stored in many places, including in the app store that was used to download TikTok.

TikTok fans also seem to be behind the sudden surge in popularity of Xiaohongshu, which was the most downloaded free app in Apple’s U.S. App Store on Tuesday and Wednesday. Hundreds of millions of people in China use the app, which, like TikTok, features short videos and text-based posts. Xiaohongshu means “little red book” in Mandarin.

Mr. Chander anticipates that the Supreme Court will uphold the law this week, though he believes that TikTok has the stronger case. He said the downloads of Red Note and the Chinese spy memes showed that many Americans did not share their government’s security concerns, particularly when weighed against free speech.

“When the United States shutters a massive free expression service, which our democratic allies have not shuttered, it will make us the censor and put us in the unusual position of silencing expression,” Mr. Chander said. “It will make Americans who use TikTok really distrustful of the U.S. government as carrying their best interests.”

Edison stock turns volatile as growing blame for wildfires lands on the power company

Southern California’s catastrophic fires have rocked the stock of Edison International, the parent company of Southern California Edison, as accusations and lawsuits about the utility’s potential role in starting the fires mount.

Shares of Edison International closed up 5% at $61.30 on Wednesday after plunging 23% this month, making it one of the worst performers on the Standard & Poor’s 500. The rebound came after Ladenburg Thalmann analysts upgraded their rating of the stock to neutral from sell, saying that their target price of $56.50 a share reflected worst-case outcomes associated with the current wildfires.

“At this time, it is too early to discern what the outcomes will be with respect to the impact of the fires on the California Wildfire Insurance Fund solvency and/or the future earnings of Edison International,” the analysts wrote, according to Barron’s. “An initial assessment of SCE’s role in the start of the fires will likely not occur until the summer of 2025 at the earliest.”

State lawmakers established the wildfire fund in the wake of wildfires several years ago after Wall Street investors lost confidence and ratings agencies threatened to downgrade California’s investor-owned utilities.

Market analyst Zacks downgraded Edison International stock from outperform to neutral after the fires started last week. Zacks predicted Edison’s operating revenue would increase during 2025 and 2026, while acknowledging that “the company has been incurring significant wildfire-related costs” and that “higher-than-expected decommissioning costs could materially impact the company’s operating results.”

RBC Capital Markets, another analysis firm, had a loftier view of Edison as recently as October, when it called the utility “a high quality operator, with investor confidence around wildfire risk improving from best in class mitigation efforts.”

The fallout from the fires is an abrupt disruption for a company that had been surging in recent months. In its most recent quarterly report, the company posted a profit of $516 million, or $1.33 per share, compared with $155 million, or 40 cents per share, in the third quarter of the previous year.

“Our team has achieved remarkable success over the last several years managing unprecedented climate challenges, making our operations more resilient and positioning us strongly for the growth ahead,” President Pedro J. Pizarro said in the report.

Fire agencies are investigating whether downed Southern California Edison utility equipment played a role in igniting the 800-acre Hurst fire near Sylmar, company officials have acknowledged.

The company issued a report Friday saying that a downed conductor was discovered at a tower in the vicinity of the Hurst fire, but that it “does not know whether the damage observed occurred before or after the start of the fire.” The fire is nearly fully contained, according to the California Department of Forestry and Fire Protection.

SCE is also under scrutiny over whether its equipment helped spark the Eaton fire, which has burned 14,000 acres, destroyed thousands of structures and wiped out whole swaths of Altadena, where at least 16 people died in the blaze.

On Tuesday the Newport Beach law firm of Bridgford, Gleason & Artinian filed a mass action complaint in Los Angeles Superior Court against SCE regarding the Eaton fire on behalf of victims including Jeremy Gursey, whose Altadena property was destroyed in the fire.

“Based upon our investigation, our discussions with various consultants, the public statements of SCE, and the video evidence of the fire’s origin, we believe that the Eaton Fire was ignited because of SCE’s failure to de-energize its overhead wires which traverse Eaton Canyon—despite a red flag PDS wind warning issued by the national weather service the day before the ignition of the fire,” lawyer Richard Bridgford said in a statement.

The firm said it has represented more than 10,000 California fire victims in past suits against Pacific Gas & Electric Co. and SCE. Bridgford told Yahoo Finance that his inbox is full of Southern California residents seeking to participate in the Eaton fire lawsuit and that he anticipates “there’ll be hundreds joining.”

The most extreme level of a red flag fire warning, a “particularly dangerous situation,” returned to parts of Los Angeles and Ventura counties Wednesday morning, heightening concerns about the potential for new fires.

“The danger has not yet passed,” Los Angeles Fire Department Chief Kristin Crowley said during a news conference Wednesday. “So please prioritize your safety.”

Albania Gives Jared Kushner Hotel Project a Nod as Trump Returns

The government of Albania has given preliminary approval to a plan proposed by Jared Kushner, Donald J. Trump’s son-in-law, to build a $1.4 billion luxury hotel complex on a small abandoned military base off the coast of Albania.

The project is one of several tied to Mr. Trump and his extended family that directly involve foreign government entities and that will move ahead even as Mr. Trump takes charge of foreign policy toward those same nations.

The approval by Albania’s Strategic Investment Committee — which is led by Prime Minister Edi Rama — gives Mr. Kushner and his business partners the right to move ahead with accelerated negotiations to build the luxury resort on a 111-acre section of the 2.2-square-mile island of Sazan that will be connected by ferry to the mainland.

Mr. Kushner and the Albanian government did not respond Wednesday to requests for comment. But when previously asked about this project, both have said that the evaluation is not being influenced by Mr. Kushner’s ties to Mr. Trump or any effort to try to seek favors from the U.S. government.

“The fact that such a renowned American entrepreneur shows his interest on investing in Albania makes us very proud and happy,” a spokesman for Mr. Rama said last year in a statement to The New York Times when asked about the projects.

Mr. Kushner’s Affinity Partners, a private equity company backed by about $4.6 billion, mostly from Saudi Arabia and other Middle East sovereign wealth funds, is pursuing the Albania project along with Asher Abehsera, a real estate executive with whom Mr. Kushner has previously teamed up to build projects in Brooklyn, N.Y.

The Albanian government, according to an official document recently posted online, will now work with its American partners to clear the proposed hotel site of any potential buried munitions and to examine any other environmental or legal concerns that need to be resolved before the project can move ahead.

The document, dated Dec. 30, notes that the government “has the right to revoke the decision,” depending on the final project negotiations.

Mr. Kushner’s firm has said the plan is to build a five-star “eco-resort community” on the island by turning a “former military base into a vibrant international destination for hospitality and wellness.”

Ivanka Trump, Mr. Trump’s daughter, has said she is helping with the project as well. “We will execute on it,” she said about the project, during a podcast last year.

The Albania project is one of two major real estate deals involving foreign governments that Mr. Kushner is pursuing along with Mr. Abehsera.

Separately, the partnership received preliminary approval last year to build a luxury hotel complex in Belgrade, Serbia, in the former Ministry of Defense building, which has sat empty for decades after it was bombed by NATO in 1999 during a war there.

Serbia and Albania both have foreign policy matters pending with the United States: each is seeking continued U.S. support for its long-stalled effort to join the European Union, and officials in Washington are trying to persuade Serbia to tighten its ties with the United States instead of Russia.

Virginia Canter, who served as a White House ethics lawyer during the Obama and Clinton administrations and as an ethics adviser to the International Monetary Fund, said that even if there was no attempt to gain influence with Mr. Trump, any government deal involving his family creates that impression.

“It all looks like favoritism, like they are providing access to Kushner because they want to be on the good side of Trump,” said Ms. Canter, who is now with the State Democracy Defenders Fund, a group that tracks federal government corruption and ethics issues.
