Business
Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns
In late 2023, Israel was aiming to assassinate Ibrahim Biari, a top Hamas commander in the northern Gaza Strip who had helped plan the Oct. 7 massacres. But Israeli intelligence could not find Mr. Biari, who they believed was hidden in the network of tunnels underneath Gaza.
So Israeli officers turned to a new military technology infused with artificial intelligence, three Israeli and American officials briefed on the events said. The technology was developed a decade earlier but had not been used in battle. Finding Mr. Biari provided new incentive to improve the tool, so engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, soon integrated A.I. into it, the people said.
Shortly thereafter, Israel listened to Mr. Biari’s calls and tested the A.I. audio tool, which gave an approximate location for where he was making his calls. Using that information, Israel ordered airstrikes to target the area on Oct. 31, 2023, killing Mr. Biari. More than 125 civilians also died in the attack, according to Airwars, a London-based conflict monitor.
The audio tool was just one example of how Israel has used the war in Gaza to rapidly test and deploy A.I.-backed military technologies to a degree that had not been seen before, according to interviews with nine American and Israeli defense officials, who spoke on the condition of anonymity because the work is confidential.
In the past 18 months, Israel has also combined A.I. with facial recognition software to match partly obscured or injured faces to real identities, turned to A.I. to compile potential airstrike targets, and created an Arabic-language A.I. model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.
Many of these efforts were a partnership between enlisted soldiers in Unit 8200 and reserve soldiers who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as “The Studio,” an innovation hub and place to match experts with A.I. projects, the people said.
Yet even as Israel raced to develop the A.I. arsenal, deployment of the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, the Israeli and American officials said. Some officials have struggled with the ethical implications of the A.I. tools, which could result in increased surveillance and other civilian killings.
No other nation has been as active as Israel in experimenting with A.I. tools in real-time battles, European and American defense officials said, giving a preview of how such technologies may be used in future wars — and how they might also go awry.
“The urgent need to cope with the crisis accelerated innovation, much of it A.I.-powered,” said Hadas Lorber, the head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director at the Israeli National Security Council. “It led to game-changing technologies on the battlefield and advantages that proved critical in combat.”
But the technologies “also raise serious ethical questions,” Ms. Lorber said. She warned that A.I. needs checks and balances, adding that humans should make the final decisions.
A spokeswoman for Israel’s military said she could not comment on specific technologies because of their “confidential nature.” Israel “is committed to the lawful and responsible use of data technology tools,” she said, adding that the military was investigating the strike on Mr. Biari and was “unable to provide any further information until the investigation is complete.”
Meta and Microsoft declined to comment. Google said it has “employees who do reserve duty in various countries around the world. The work those employees do as reservists is not connected to Google.”
Israel previously used conflicts in Gaza and Lebanon to experiment with and advance tech tools for its military, such as drones, phone hacking tools and the Iron Dome air defense system, which intercepts short-range rockets.
After Hamas launched cross-border attacks into Israel on Oct. 7, 2023, killing more than 1,200 people and taking 250 hostages, A.I. technologies were quickly cleared for deployment, four Israeli officials said. That led to the cooperation between Unit 8200 and reserve soldiers in “The Studio” to swiftly develop new A.I. capabilities, they said.
Avi Hasson, the chief executive of Startup Nation Central, an Israeli nonprofit that connects investors with companies, said reservists from Meta, Google and Microsoft had become crucial in driving innovation in drones and data integration.
“Reservists brought know-how and access to key technologies that weren’t available in the military,” he said.
Israel’s military soon used A.I. to enhance its drone fleet. Aviv Shapira, founder and chief executive of XTEND, a software and drone company that works with the Israeli military, said A.I.-powered algorithms were used to build drones that can lock on to and track targets from a distance.
“In the past, homing capabilities relied on zeroing in on an image of the target,” he said. “Now A.I. can recognize and track the object itself — may it be a moving car, or a person — with deadly precision.”
Mr. Shapira said his main clients, the Israeli military and the U.S. Department of Defense, were aware of A.I.’s ethical implications in warfare and discussed responsible use of the technology.
One tool developed by “The Studio” was an Arabic-language A.I. model known as a large language model, three Israeli officers familiar with the program said. (The large language model was earlier reported by +972 Magazine, an Israeli-Palestinian news site.)
Developers previously struggled to create such a model because of a dearth of Arabic-language data to train the technology. When such data was available, it was mostly in standard written Arabic, which is more formal than the dozens of dialects used in spoken Arabic.
The Israeli military did not have that problem, the three officers said. The country had decades of intercepted text messages, transcribed phone calls and posts scraped from social media in spoken Arabic dialects. So Israeli officers created the large language model in the first few months of the war and built a chatbot to run queries in Arabic. They merged the tool with multimedia databases, allowing analysts to run complex searches across images and videos, four Israeli officials said.
When Israel assassinated the Hezbollah leader Hassan Nasrallah in September, the chatbot analyzed the responses across the Arabic-speaking world, three Israeli officers said. The technology differentiated among different dialects in Lebanon to gauge public reaction, helping Israel to assess if there was public pressure for a counterstrike.
At times, the chatbot could not identify some modern slang terms and words that were transliterated from English to Arabic, two officers said. That required Israeli intelligence officers with expertise in different dialects to review and correct its work, one of the officers said.
The chatbot also sometimes provided wrong answers — for instance, returning photos of pipes instead of guns — two Israeli intelligence officers said. Even so, the A.I. tool significantly accelerated research and analysis, they said.
After the Oct. 7 attacks, Israel also began equipping cameras at temporary checkpoints set up between the northern and southern Gaza Strip with the ability to scan and send high-resolution images of Palestinians to an A.I.-backed facial recognition program.
This system, too, sometimes had trouble identifying people whose faces were obscured. That led to arrests and interrogations of Palestinians who were mistakenly flagged by the facial recognition system, two Israeli intelligence officers said.
Israel also used A.I. to sift through data amassed by intelligence officials on Hamas members. Before the war, Israel built a machine-learning algorithm — code-named “Lavender” — that could quickly sort data to hunt for low-level militants. It was trained on a database of confirmed Hamas members and meant to predict who else might be part of the group. Though the system’s predictions were imperfect, Israel used it at the start of the war in Gaza to help choose attack targets.
Few goals loomed larger than finding and eliminating Hamas’s senior leadership. Near the top of the list was Mr. Biari, the Hamas commander who Israeli officials believed played a central role in planning the Oct. 7 attacks.
Israel’s military intelligence quickly intercepted Mr. Biari’s calls with other Hamas members but could not pinpoint his location. So they turned to the A.I.-backed audio tool, which analyzed different sounds, such as sonic bombs and airstrikes.
After deducing an approximate location for where Mr. Biari was placing his calls, Israeli military officials were warned that the area, which included several apartment complexes, was densely populated, two intelligence officers said. An airstrike would need to target several buildings to ensure Mr. Biari was assassinated, they said. The operation was greenlit.
Since then, Israeli intelligence has also used the audio tool alongside maps and photos of Gaza’s underground tunnel maze to locate hostages. Over time, the tool was refined to more precisely find individuals, two Israeli officers said.
Business
As Netflix and Paramount circle Warner Bros. Discovery, Hollywood unions voice alarm
The sale of Warner Bros. — whether in pieces to Netflix or in its entirety to Paramount — is stirring mounting worries among Hollywood union leaders about the possible fallout for their members.
Unions representing writers, directors, actors and crew workers have voiced growing concerns that further consolidation in the media industry will reduce competition, potentially causing studios to pay less for content, and make it more difficult for people to find work.
“We’ve seen this movie before, and we know how it ends,” said Michele Mulroney, president of the Writers Guild of America West. “There are lots of promises made that one plus one is going to equal three. But it’s very hard to envision how two behemoths, for example, Warner Bros. and Netflix … can keep up the level of output they currently have.”
Last week, Netflix announced it agreed to buy Warner Bros. Discovery’s film and TV studio, Burbank lot, HBO and HBO Max for $27.75 a share, or $72 billion. It also agreed to take on more than $10 billion of Warner Bros.’ debt. But Paramount, whose previous offers were rebuffed by Warner Bros., has appealed directly to shareholders with an alternative bid to buy all of the company for about $78 billion.
Paramount has said it would make more than $6 billion in cuts over three years, while pledging that the combined companies would release at least 30 movies a year. Netflix said it expects its deal to yield $2 billion to $3 billion in cost cuts.
Those cuts are expected to trigger thousands of layoffs across Hollywood, which has already been squeezed by the flight of production overseas and a contraction in the once booming TV business.
Mulroney said that employment for WGA writers in episodic television is down as much as 40% when comparing the 2023-2024 writing season to 2022-2023.
Executives from both companies have said their deals would benefit creative talent and consumers.
But Hollywood union leaders are skeptical.
“We can hear the generalizations all day long, but it doesn’t really mean anything unless it’s on paper, and we just don’t know if these companies are even prepared to make promises in writing,” said Lindsay Dougherty, Teamsters at-large vice president and principal officer for Local 399, which represents drivers, location managers and casting directors.
Dougherty said the Teamsters have been engaged with both Netflix and Paramount, seeking commitments to keep filming in Los Angeles.
“We have a lot of members that are struggling to find work, or haven’t really worked in the last year or so,” Dougherty said.
Mulroney said her union has concerns about both bids, Netflix’s and Paramount’s.
“We don’t think the merger is inevitable,” Mulroney said. “We think there’s an opportunity to push back here.”
If Netflix were to buy Warner Bros.’ TV and film businesses, Mulroney said that could further undermine the theatrical business.
“It’s hard to imagine them fully embracing theatrical exhibition,” Mulroney said. “The exhibition business has been struggling to get back on its feet ever since the pandemic, so a move like this could really be existential.”
But the Writers Guild also has issues with Paramount’s bid, Mulroney said, noting that it would put Paramount-owned CBS News and CNN under the same parent company.
“We have censorship concerns,” Mulroney said. “We saw issues around [Stephen] Colbert and [Jimmy] Kimmel. We’re concerned about what the news would look like under single ownership here.”
That question was made more salient this week after President Trump, who has for years harshly criticized CNN’s hosts and news coverage, said he believes CNN should be sold.
The worries come as several unions’ contracts with the major studios, including those of the DGA, the WGA and the performers guild SAG-AFTRA, are set to expire next year. Two years ago, writers and actors went on prolonged strikes to push for more AI protections and better wages and benefits.
The Directors Guild of America and performers union SAG-AFTRA have voiced similar objections to the pending media consolidation.
“A deal that is in the interest of SAG-AFTRA members and all other workers in the entertainment industry must result in more creation and more production, not less,” the union said.
SAG-AFTRA National Executive Director Duncan Crabtree-Ireland said the union has been in discussions with both Paramount and Netflix.
“It is as yet unclear what path forward is going to best protect the legacy that Warner Brothers presents, and that’s something that we’re very actively investigating right now,” he said.
It’s not clear, however, how much influence the unions will have in the outcome.
“They just don’t have a seat at the ultimate decision making table,” said David Smith, a professor of economics at the Pepperdine Graziadio Business School. “I expect their primary involvement could be through creating more awareness of potential challenges with a merger and potentially more regulatory scrutiny … I think that’s what they’re attempting to do.”
Business
Investor pleads guilty in criminal case that felled hedge fund, damaged B. Riley
Businessman Brian Kahn has pleaded guilty to conspiracy to commit securities fraud in a case that brought down a hedge fund, helped lead to the bankruptcy of a retailer and damaged West Los Angeles investment bank B. Riley Financial.
Kahn, 52, admitted in a Trenton, N.J., federal court Wednesday to hiding trading losses that brought down Prophecy Asset Management in 2020. The Securities and Exchange Commission alleged the losses exceeded $400 million.
An investor lawsuit has accused Kahn of funneling some of the fund’s money to Franchise Group, a Delaware retail holding company he assembled that owned Vitamin Shoppe, Pet Supplies Plus and other chains.
B. Riley provided $600 million through debt it raised to finance a $2.8-billion management buyout led by Kahn in 2023. It also took a 31% stake in the company and lent Kahn’s investment fund $201 million, largely secured with shares of Franchise Group.
Kahn had done deals with B. Riley co-founder Bryant Riley before partnering with the L.A. businessman on Franchise Group.
However, the buyout didn’t work out amid fallout from the hedge fund scandal and slowing sales at the retailers. Franchise Group filed for bankruptcy in November 2024. A slimmed-down version of the company emerged from Chapter 11 in June.
B. Riley has disclosed in regulatory filings that the firm and Riley have received SEC subpoenas regarding their dealings with Kahn, Franchise Group and other matters.
Riley, 58, the firm’s chairman and co-chief executive, has denied any knowledge of the wrongdoing, and an outside law firm’s review reached the same conclusion.
The failed deal led to huge losses at the financial services firm, pummeling B. Riley’s stock, which had approached $90 in 2021. Shares were trading Friday at $3.98.
The company has marked down its Franchise Group investment, and has spent the last year or so paring debt through refinancing, selling off parts of its business and other steps, including closing offices.
The company announced last month it is changing its name to BRC Group Holdings in January. It did not immediately respond to requests for comment.
At Wednesday’s plea hearing, Assistant U.S. Atty. Kelly Lyons said that Kahn conspired to “defraud dozens of investors who had invested approximately $360 million” through “lies, deception, misleading statements and material omissions.”
U.S. District Judge Michael Shipp released Kahn on a $100,000 bond and set an April 2 sentencing date. He faces up to five years in prison. Kahn, his lawyer and Lyons declined to comment after the hearing.
Kahn is the third Prophecy official charged over the hedge fund’s collapse. Two other executives, John Hughes and Jeffrey Spotts, have also been charged.
Hughes pleaded guilty and is cooperating with prosecutors. Spotts pleaded not guilty and faces trial next year. The two men and Kahn also have been sued by the SEC over the Prophecy collapse.
Bloomberg News contributed to this report.
Business
Podcast industry is divided as AI bots flood the airwaves with thousands of programs
Chatty bots are sharing their hot takes through hundreds of thousands of AI-generated podcasts. And the invasion has just begun.
Though their banter can be a bit banal, the AI podcasters’ confidence and research are now arguably better than most people’s.
“We’ve just begun to cross the threshold of voice AI being pretty much indistinguishable from human,” said Alan Cowen, chief executive of Hume AI, a startup specializing in voice technology. “We’re seeing creators use it in all kinds of ways.”
AI can make podcasts sound better and cost less, industry insiders say, but the growing swarm of new competitors entering an already crowded market is disrupting the industry.
Some podcasters are pushing back, requesting restrictions. Others are already cloning their voices and handing over their podcasts to AI bots.
Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast “Diary of a CEO.” On YouTube, his clone narrates “100 CEOs With Steven Bartlett,” which adds AI-generated animation to Bartlett’s cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson.
Erica Mandy, the Redondo Beach-based host of the daily news podcast called “The Newsworthy,” let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out.
She fed her script into a text-to-speech model and selected a female AI voice from ElevenLabs to speak for her.
“I still recorded the show with my very hoarse voice, but then put the AI voice over that, telling the audience from the very beginning, I’m sick,” Mandy said.
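For readers curious what that stand-in workflow involves, here is a minimal sketch of sending a finished script to ElevenLabs’ public text-to-speech endpoint and saving the returned audio. The voice ID, model name and file names are illustrative placeholders, not the settings Mandy actually used, and the request fields reflect the API’s documented shape rather than her tooling.

```python
# Minimal sketch: voice a prepared script with ElevenLabs' text-to-speech REST API.
# The voice_id, model_id and file names are placeholders, not the settings
# "The Newsworthy" actually used.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]   # your ElevenLabs key
VOICE_ID = "EXAVITQu4vr4xnSDxMaL"            # placeholder voice ID chosen from the voice library

with open("episode_script.txt", encoding="utf-8") as f:
    script = f.read()

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": script, "model_id": "eleven_multilingual_v2"},
    timeout=120,
)
resp.raise_for_status()

# The endpoint returns audio bytes (MP3 by default), ready to drop into an editor.
with open("episode_ai_voice.mp3", "wb") as out:
    out.write(resp.content)
```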
Mandy had previously used ElevenLabs for its voice isolation feature, which uses AI to remove ambient noise from interviews.
The AI fill-in elicited mixed responses from listeners. Some asked if she was OK. One fan said she should never do it again. Most weren’t sure what to think.
“A lot of people were like, ‘That was weird,’” Mandy said.
In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content.
Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company.
Jason Saldanha of PRX, a podcast network that represents human creators such as Ezra Klein, said the tsunami of AI podcasts won’t attract premium ad rates.
“Adding more podcasts in a tyranny of choice environment is not great,” he said. “I’m not interested in devaluing premium.”
Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality.
Hume AI, which is based in New York but has a big research team in California, raised $50 million last year and has tens of thousands of creators using its software to generate audiobooks, podcasts, films, voice-overs for videos and dialogue generation in video games.
“We focus our platform on being able to edit content so that you can take in postproduction an existing podcast and regenerate a sentence in the same voice, with the same prosody or emotional intonation using instant cloning,” said company CEO Cowen.
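As a rough illustration of that kind of postproduction edit, the sketch below splices a regenerated sentence into an existing episode using pydub. It assumes the replacement clip has already been produced by a voice-cloning service and that the start and end times of the sentence being swapped are known; the file names and timestamps are hypothetical, and this is not Hume’s own tooling.

```python
# Minimal sketch: swap one sentence in a finished episode for a regenerated clip.
# File names and timestamps are hypothetical; the replacement clip is assumed to
# come from a voice-cloning service like those described above.
from pydub import AudioSegment  # pip install pydub; requires ffmpeg

episode = AudioSegment.from_mp3("episode_final.mp3")
replacement = AudioSegment.from_mp3("regenerated_sentence.mp3")

# Start and end of the sentence being replaced, in milliseconds
# (found by ear or from a transcript with timestamps).
start_ms, end_ms = 12_350, 16_900

patched = episode[:start_ms] + replacement + episode[end_ms:]
patched.export("episode_patched.mp3", format="mp3")
```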
Some are using the tech to carpet-bomb the market with content.
Los Angeles podcasting studio Inception Point AI has produced some 200,000 podcast episodes, accounting for 1% of all podcasts published on the internet, according to CEO Jeanine Wright.
The podcasts are so cheap to make that the studio can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects.
Instead of searching for a specific “hit” podcast idea, the studio spends just $1 to produce an episode, which means a show can be profitable with as few as 25 listeners.
“That means most of the stuff that we make, we have really an unlimited amount of experimentation and creative freedom for what we want to do,” Wright said.
One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue. “I am indeed AI-powered — which means I’ve got receipts older than your grandmother’s jewelry box, and a memory sharper than a stiletto heel on marble. No forgetting, no forgiving, and definitely no filter,” the character discloses at the start of the podcast.
“We’ve kind of molded her more towards what the audience wants,” said Katie Brown, chief content officer at Inception Point, who helps design the personalities of the AI podcasters.
Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown.
The technology also makes it easy to get podcasts up quickly. Inception Point has found some success with flash biographies posted promptly about people in the news. It uses AI software to spot a trending personality and create two episodes, complete with promo art and a trailer.
When Charlie Kirk was shot, its AI immediately created two shows called “Charlie Kirk Death” and “Charlie Kirk Manhunt” as part of the biography series.
“We were able to create all of that content, each with different angles, pulling from different news sources, and we were able to get that content up within an hour,” Wright said.
Speed is key when it comes to breaking news, and its AI podcasts reached the top of some charts.
“Our content was coming up, really dominating the list of what people were searching for,” she said.
Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.