Business
Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns
In late 2023, Israel was aiming to assassinate Ibrahim Biari, a top Hamas commander in the northern Gaza Strip who had helped plan the Oct. 7 massacres. But Israeli intelligence could not find Mr. Biari, who they believed was hidden in the network of tunnels underneath Gaza.
So Israeli officers turned to a new military technology infused with artificial intelligence, three Israeli and American officials briefed on the events said. The technology was developed a decade earlier but had not been used in battle. Finding Mr. Biari provided new incentive to improve the tool, so engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, soon integrated A.I. into it, the people said.
Shortly thereafter, Israel listened to Mr. Biari’s calls and tested the A.I. audio tool, which gave an approximate location for where he was making his calls. Using that information, Israel ordered airstrikes to target the area on Oct. 31, 2023, killing Mr. Biari. More than 125 civilians also died in the attack, according to Airwars, a London-based conflict monitor.
The audio tool was just one example of how Israel has used the war in Gaza to rapidly test and deploy A.I.-backed military technologies to a degree that had not been seen before, according to interviews with nine American and Israeli defense officials, who spoke on the condition of anonymity because the work is confidential.
In the past 18 months, Israel has also combined A.I. with facial recognition software to match partly obscured or injured faces to real identities, turned to A.I. to compile potential airstrike targets, and created an Arabic-language A.I. model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.
Many of these efforts were a partnership between enlisted soldiers in Unit 8200 and reserve soldiers who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as “The Studio,” an innovation hub and place to match experts with A.I. projects, the people said.
Yet even as Israel raced to develop the A.I. arsenal, deployment of the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, the Israeli and American officials said. Some officials have struggled with the ethical implications of the tools, which could lead to increased surveillance and additional civilian deaths.
No other nation has been as active as Israel in experimenting with A.I. tools in real-time battles, European and American defense officials said, giving a preview of how such technologies may be used in future wars — and how they might also go awry.
“The urgent need to cope with the crisis accelerated innovation, much of it A.I.-powered,” said Hadas Lorber, the head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director at the Israeli National Security Council. “It led to game-changing technologies on the battlefield and advantages that proved critical in combat.”
But the technologies “also raise serious ethical questions,” Ms. Lorber said. She warned that A.I. needs checks and balances, adding that humans should make the final decisions.
A spokeswoman for Israel’s military said she could not comment on specific technologies because of their “confidential nature.” Israel “is committed to the lawful and responsible use of data technology tools,” she said, adding that the military was investigating the strike on Mr. Biari and was “unable to provide any further information until the investigation is complete.”
Meta and Microsoft declined to comment. Google said it has “employees who do reserve duty in various countries around the world. The work those employees do as reservists is not connected to Google.”
Israel previously used conflicts in Gaza and Lebanon to experiment with and advance tech tools for its military, such as drones, phone hacking tools and the Iron Dome defense system, which can help intercept short-range rockets.
After Hamas launched cross-border attacks into Israel on Oct. 7, 2023, killing more than 1,200 people and taking 250 hostages, A.I. technologies were quickly cleared for deployment, four Israeli officials said. That led to the cooperation between Unit 8200 and reserve soldiers in “The Studio” to swiftly develop new A.I. capabilities, they said.
Avi Hasson, the chief executive of Startup Nation Central, an Israeli nonprofit that connects investors with companies, said reservists from Meta, Google and Microsoft had become crucial in driving innovation in drones and data integration.
“Reservists brought know-how and access to key technologies that weren’t available in the military,” he said.
Israel’s military soon used A.I. to enhance its drone fleet. Aviv Shapira, founder and chief executive of XTEND, a software and drone company that works with the Israeli military, said A.I.-powered algorithms were used to build drones to lock on and track targets from a distance.
“In the past, homing capabilities relied on zeroing in on an image of the target,” he said. “Now A.I. can recognize and track the object itself — may it be a moving car, or a person — with deadly precision.”
Mr. Shapira said his main clients, the Israeli military and the U.S. Department of Defense, were aware of A.I.’s ethical implications in warfare and discussed responsible use of the technology.
One tool developed by “The Studio” was an Arabic-language A.I. model known as a large language model, three Israeli officers familiar with the program said. (The large language model was earlier reported by Plus 972, an Israeli-Palestinian news site.)
Developers previously struggled to create such a model because of a dearth of Arabic-language data to train the technology. When such data was available, it was mostly in standard written Arabic, which is more formal than the dozens of dialects used in spoken Arabic.
The Israeli military did not have that problem, the three officers said. The country had decades of intercepted text messages, transcribed phone calls and posts scraped from social media in spoken Arabic dialects. So Israeli officers created the large language model in the first few months of the war and built a chatbot to run queries in Arabic. They merged the tool with multimedia databases, allowing analysts to run complex searches across images and videos, four Israeli officials said.
When Israel assassinated the Hezbollah leader Hassan Nasrallah in September, the chatbot analyzed the responses across the Arabic-speaking world, three Israeli officers said. The technology differentiated among different dialects in Lebanon to gauge public reaction, helping Israel to assess if there was public pressure for a counterstrike.
At times, the chatbot could not identify some modern slang terms and words that were transliterated from English to Arabic, two officers said. That required Israeli intelligence officers with expertise in different dialects to review and correct its work, one of the officers said.
The chatbot also sometimes provided wrong answers — for instance, returning photos of pipes instead of guns — two Israeli intelligence officers said. Even so, the A.I. tool significantly accelerated research and analysis, they said.
After the Oct. 7 attacks, Israel also began equipping cameras at temporary checkpoints set up between the northern and southern Gaza Strip with the ability to scan and send high-resolution images of Palestinians to an A.I.-backed facial recognition program.
This system, too, sometimes had trouble identifying people whose faces were obscured. That led to arrests and interrogations of Palestinians who were mistakenly flagged by the facial recognition system, two Israeli intelligence officers said.
Israel also used A.I. to sift through data amassed by intelligence officials on Hamas members. Before the war, Israel built a machine-learning algorithm — code-named “Lavender” — that could quickly sort data to hunt for low-level militants. It was trained on a database of confirmed Hamas members and meant to predict who else might be part of the group. Though the system’s predictions were imperfect, Israel used it at the start of the war in Gaza to help choose attack targets.
Few goals loomed larger than finding and eliminating Hamas’s senior leadership. Near the top of the list was Mr. Biari, the Hamas commander who Israeli officials believed played a central role in planning the Oct. 7 attacks.
Israel’s military intelligence quickly intercepted Mr. Biari’s calls with other Hamas members but could not pinpoint his location. So they turned to the A.I.-backed audio tool, which analyzed different sounds, such as sonic bombs and airstrikes.
After deducing an approximate location for where Mr. Biari was placing his calls, Israeli military officials were warned that the area, which included several apartment complexes, was densely populated, two intelligence officers said. An airstrike would need to target several buildings to ensure Mr. Biari was assassinated, they said. The operation was greenlit.
Since then, Israeli intelligence has also used the audio tool alongside maps and photos of Gaza’s underground tunnel maze to locate hostages. Over time, the tool was refined to more precisely find individuals, two Israeli officers said.
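Geolocating a speaker from audio alone is, in principle, a multilateration problem: if several microphones or sensors pick up the same sound, the differences in arrival time constrain where it came from. The article does not describe how Israel's tool works internally; the sketch below is only a generic illustration of that principle, with hypothetical sensor positions and a brute-force grid search standing in for a real solver.

```python
import itertools
import math

# Hypothetical sensor positions in meters on a flat plane, and the speed
# of sound in air. These numbers are purely illustrative.
SENSORS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (800.0, 900.0)]
SPEED_OF_SOUND = 343.0  # m/s

def arrival_times(src):
    """Time of flight from a source position to each sensor."""
    return [math.dist(src, s) / SPEED_OF_SOUND for s in SENSORS]

def tdoa(times):
    """Pairwise time differences of arrival between sensors."""
    return [times[j] - times[i]
            for i, j in itertools.combinations(range(len(times)), 2)]

def locate(observed_times, step=10.0, extent=1200.0):
    """Grid search for the position whose predicted time differences best
    match the observed ones. Real systems solve this algebraically and in
    three dimensions; brute force keeps the idea visible."""
    observed = tdoa(observed_times)
    best, best_err = None, float("inf")
    steps = int(extent / step) + 1
    for ix in range(steps):
        for iy in range(steps):
            candidate = (ix * step, iy * step)
            predicted = tdoa(arrival_times(candidate))
            err = sum((p - o) ** 2 for p, o in zip(predicted, observed))
            if err < best_err:
                best, best_err = candidate, err
    return best

# A sound emitted at (500, 300) is recovered to within the grid resolution.
estimate = locate(arrival_times((500.0, 300.0)))
```

In practice the hard part is the one the sketch skips: extracting clean, time-aligned signatures of the same sound from noisy real-world recordings before any geometry can be applied.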
Sora app’s hyperreal AI videos ignite online trust crisis as downloads surge
Scrolling through the Sora app can feel a bit like entering a real-life multiverse.
Michael Jackson performs standup; the alien from the “Predator” movies flips burgers at McDonald’s; a home security camera captures a moose crashing through the glass door; Queen Elizabeth dives from the top of a table at a pub.
Such improbable realities, fantastical futures, and absurdist videos are the mainstay of the Sora app, a new short video app released by ChatGPT maker OpenAI.
The continuous stream of hyperreal, short-form videos made by artificial intelligence is mind-bending and mesmerizing at first. But it quickly triggers a new need to second-guess every piece of content as real or fake.
“The biggest risk with Sora is that it makes plausible deniability impossible to overcome, and that it erodes confidence in our ability to discern authentic from synthetic,” said Sam Gregory, an expert on deepfakes and executive director at WITNESS, a human rights organization. “Individual fakes matter, but the real damage is a fog of doubt settling over everything we see.”
All videos on the Sora app are entirely AI-generated, and there is no option to share real footage. But from the first week of its launch, users were sharing their Sora videos across all types of social media.
Less than a week after its Sept. 30 launch, the Sora app crossed a million downloads, outpacing ChatGPT’s initial growth, and reached the top of the App Store in the U.S. For now, the app is available only to iOS users in the United States, and access requires an invitation code.
To use the app, people have to scan their faces and read out three numbers displayed on screen for the system to capture a voice signature. Once that’s done, users can type a custom text prompt and create hyperreal 10-second videos complete with background sound and dialogue.
Through a feature called “Cameos,” users can superimpose their face or a friend’s face into any existing video. Though all outputs carry a visible watermark, numerous websites now offer watermark removal for Sora videos.
At launch, OpenAI took a lax approach to enforcing copyright restrictions and allowed the re-creation of copyrighted material by default, unless the owners opted out.
Users began generating AI video featuring characters from such titles as “SpongeBob SquarePants,” “South Park,” and “Breaking Bad,” and videos styled after the game show “The Price Is Right,” and the ‘90s sitcom “Friends.”
Then came the re-creation of dead celebrities, including Tupac Shakur roaming the streets in Cuba, Hitler facing off with Michael Jackson, and remixes of the Rev. Martin Luther King Jr. delivering his iconic “I Have A Dream” speech — but calling for freeing the disgraced rapper Diddy.
“Please, just stop sending me AI videos of Dad,” Zelda Williams, daughter of late comedian Robin Williams, posted on Instagram. “You’re not making art, you’re making disgusting, over-processed hot dogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else’s throat, hoping they’ll give you a little thumbs up and like it. Gross.”
Other dead celebrity re-creations, including Kobe Bryant, Stephen Hawking and President Kennedy, created on Sora have been cross-posted on social media websites, garnering millions of views.
Christina Gorski, director of communications at Fred Rogers Productions, said that Rogers’ family was “frustrated by the AI videos misrepresenting Mister Rogers being circulated online.”
Videos of Mr. Rogers holding a gun, greeting rapper Tupac, and other satirical fake situations have been shared widely on Sora.
“The videos are in direct contradiction to the careful intentionality and adherence to core child development principles that Fred Rogers brought to every episode of Mister Rogers’ Neighborhood. We have contacted OpenAI to request that the voice and likeness of Mister Rogers be blocked for use on the Sora platform, and we would expect them and other AI platforms to respect personal identities in the future,” Gorski said in a statement to The Times.
Hollywood talent agencies and unions, including SAG-AFTRA, have started to accuse OpenAI of improper use of likenesses. The central tension boils down to control over the use of the likenesses of actors and licensed characters — and fair compensation for use in AI videos.
In the aftermath of Hollywood’s concerns over copyright, Sam Altman shared a blog post promising rights holders greater control over how their characters can be used in AI videos and saying that OpenAI is exploring ways to share revenue with them.
He also said that studios could now opt in to having their characters used in AI re-creations, a reversal of OpenAI’s original opt-out regime.
The future, according to Altman, is heading toward creating personalized content for an audience of a few — or an audience of one.
“Creativity could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase,” Altman wrote, calling this genre of engagement “interactive fan fiction.”
The estates of dead actors, however, are racing to protect their likenesses in the age of AI.
CMG Worldwide, which represents the estates of deceased celebrities, struck a partnership with deepfake detection company Loti AI to protect CMG’s rosters of actors and estates from unauthorized digital use.
Loti AI will constantly monitor for AI impersonations of 20 personalities represented by CMG, including Burt Reynolds, Christopher Reeve, Mark Twain and Rosa Parks.
“Since the launch of Sora 2, for example, our signups have increased roughly 30x as people search for ways to regain control over their digital likeness,” said Luke Arrigoni, co-founder and CEO of Loti AI.
Since January, Loti AI said it has removed thousands of instances of unauthorized content as new AI tools made it easier than ever to create and spread deepfakes.
After numerous “disrespectful depictions” of Martin Luther King Jr., OpenAI said it was pausing the generation of videos in the civil rights icon’s image on Sora, at the request of King’s estate. While there are strong free-speech interests in depicting historical figures, public figures and their families should ultimately have control over how their likeness is used, OpenAI said in a post.
Now, authorized representatives or estate owners can request that their likenesses not be used in Sora cameos.
As legal pressure mounts, Sora has become stricter about when it will allow the re-creation of copyrighted characters, increasingly serving content policy violation notices.
Now, creating Disney characters or other images triggers a content policy violation warning. Users who aren’t fans of the restrictions have started creating video memes about the content policy violation warnings.
There’s a growing virality to what has been dubbed “AI slop.”
Last week featured Ring camera footage of a grandmother chasing a crocodile at the door, and a series of “fat olympics” videos in which obese people compete in athletic events such as pole vault, swimming and track.
Dedicated slop factories have turned the engagement into a money spinner, generating a constant stream of videos that are hard to look away from. One pithy tech commentator dubbed it “Cocomelon for adults.”
Even with increasing protections for celebrity likenesses, critics warn that the casual “likeness appropriation” of any common person or situation could lead to public confusion, enhance misinformation and erode public trust.
Meanwhile, as bad actors and even some governments use the technology for propaganda and to promote certain political views, people in power can hide behind the flood of fakes by claiming that even genuine evidence was generated by AI, said Gregory of WITNESS.
“I’m concerned about the ability to fabricate protest footage, stage false atrocities, or insert real people with words placed in their mouths into compromising scenarios,” he said.
After ‘Megalopolis’ flops, Francis Ford Coppola puts his pricey watch collection up for auction
Francis Ford Coppola wants an offer he can’t refuse — on his timepieces.
The Academy Award-winning director is selling seven watches from his personal collection, including his custom F.P. Journe FFC Prototype, estimated to sell for more than $1 million, according to a statement from Phillips, the New York City-based auction house. Phillips will hold the auction on Dec. 6 and 7.
The sale could help stanch losses from last year’s box-office flop “Megalopolis,” which cost over $120 million to make and was largely financed by the 86-year-old director. The movie grossed only $14.3 million worldwide.
The film, Coppola’s first since his 2011 horror movie “Twixt,” premiered at Cannes last year to largely negative reviews. The Times’ Joshua Rothkopf called it a “wildly ambitious, overstuffed city epic.”
At a news conference at Cannes, Coppola discussed the tremendous amount of his own money that he had sunk into the film, saying that he “never cared about money” and that his children “don’t need a fortune.”
Among the Coppola timepieces also going under the hammer are examples from Patek Philippe, Blancpain and IWC.
But the headlining piece is the F.P. Journe FFC Prototype, which features a black titanium, human-like hand resembling a steampunk gauntlet; its fingers extend or retract to indicate the hours.
Francis Ford Coppola’s custom F.P. Journe FFC timepiece uses a single hand to indicate all 12 hours. (Phillips)
The watch was a collaboration between Coppola and master watchmaker François-Paul Journe that began with a conversation the pair had during Journe’s 2012 visit to the filmmaker’s Inglenook winery in Napa Valley.
Coppola asked Journe if a human hand had ever been used to mark time. That question sparked a years-long conversation during which the watchmaker grappled with how to indicate the 12 hours of the dial using just five fingers.
Journe found his inspiration in Ambroise Paré, a 16th-century French barber-surgeon and pioneer of prosthetic limbs, including Le Petit Lorrain, a prosthetic hand made of iron and leather with hidden gears and springs that let the fingers move, not unlike a watch mechanism.
“Speaking with Francis in 2012 and hearing his idea on the use of a human hand to indicate time inspired me to create a watch I never could have imagined myself. The challenge was formidable — exactly the type of watchmaking project I adore,” said Journe in a statement.
Journe eventually created six prototypes and delivered Coppola’s watch to him in 2021.
“I’m proud to fully support the sale of this watch through Phillips to fund the creation of his artistic masterpieces in filmmaking,” he said.
Coppola first became interested in the watchmaker when he gifted his wife Eleanor an F.P. Journe Chronomètre à Résonance in platinum with a white gold dial for Christmas in 2009, prompting the director to extend an invitation to Journe to visit him at his Napa winery.
Eleanor Coppola, a documentary filmmaker and writer, died in 2024 after 61 years of marriage. Her F.P. Journe timepiece is also part of the auction and is estimated to fetch between $120,000 and $240,000.
Disney warns that ESPN, ABC and other channels could go dark on YouTube TV
Walt Disney Co. is alerting viewers that its channels may go dark on YouTube TV amid tense contract negotiations between the two television giants.
The companies are struggling to hammer out a new distribution deal on YouTube TV for Disney’s channels, including ABC, ESPN, FX, National Geographic and Disney Channel. YouTube TV has become one of the most popular U.S. pay-TV services, boasting about 10 million subscribers for its packages of traditional television channels.
Those customers risk losing Disney’s channels, including KABC-TV Channel 7 in Los Angeles and other ABC affiliates nationwide if the two companies fail to forge a new carriage agreement by Oct. 30, when their pact expires.
“Without an agreement, we’ll have to remove Disney’s content from YouTube TV,” the Google-owned television service said Thursday in a statement.
Disney began sounding the alarm by running messages on its TV channels to warn viewers about the blackout threat.
The Burbank entertainment company becomes the latest TV programmer to allege that the tech behemoth is throwing its weight around in contract negotiations.
In recent months, both Rupert Murdoch’s Fox Corp. and Comcast’s NBCUniversal publicly complained that Google’s YouTube TV was attempting to unfairly squeeze them in their separate talks. In the end, both Fox and NBCUniversal struck new carriage contracts without their channels going dark.
Univision wasn’t as fortunate. The smaller, Spanish-language media company’s networks went dark last month on YouTube TV when the two companies failed to reach a deal.
“For the fourth time in three months, Google’s YouTube TV is putting their subscribers at risk of losing the most valuable networks they signed up for,” a Disney spokesperson said Thursday in a statement. “This is the latest example of Google exploiting its position at the expense of their own customers.”
YouTube TV, for its part, alleged that Disney was the one making unreasonable demands.
“We’ve been working in good faith to negotiate a deal with Disney that pays them fairly for their content on YouTube TV,” a YouTube TV spokesperson said in a statement. “Unfortunately, Disney is proposing costly economic terms that would raise prices on YouTube TV customers and give our customers fewer choices, while benefiting Disney’s own live TV products – like Hulu + Live TV and, soon, Fubo.”
Disney’s Hulu + Live TV competes directly with YouTube TV by offering the same channels. Fubo is a sports streaming service that Disney is in the process of acquiring.
YouTube said if Disney channels remain “unavailable for an extended period of time,” it would offer its customers a $20 credit.
The contract tussle heightens tensions from earlier this year, when Disney’s former distribution chief, Justin Connolly, left in May to take a similar position at YouTube TV. Connolly had spent two decades at Disney and ESPN, and Disney sued to block the move, but a judge allowed him to take the new position.
YouTube TV launched in April 2017 for $35 a month. The package of channels now costs $82.99.
To attract more sports fans, YouTube TV took over the NFL Sunday Ticket premium sports package from DirecTV, which had been losing more than $100 million a year to maintain the NFL service. YouTube TV offers Sunday Ticket as a base plan add-on or as an individual channel on YouTube.
Last year, YouTube generated $54.2 billion in revenue, second only to Disney among television companies, according to research firm MoffettNathanson.
The dispute comes as the NFL and college football seasons are in full swing, with games on ABC and ESPN. The NBA season also tipped off this week, and ESPN prominently features those games. ABC’s fall season began last month with fresh episodes of such favorite programs as “Dancing with the Stars” and “Abbott Elementary.”
ABC stations also air popular newscasts including “Good Morning America” and “World News Tonight with David Muir.” Many ABC stations, including in Los Angeles, run Sony’s “Wheel of Fortune” and “Jeopardy!”
“We invest significantly in our content and expect our partners to pay fair rates that recognize that value,” Disney said. “If we don’t reach a fair deal soon, YouTube TV customers will lose access to ESPN and ABC, and all our marquee programming – including the NFL, college football, NBA and NHL seasons – and so much more.”