
Business

Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns


In late 2023, Israel was aiming to assassinate Ibrahim Biari, a top Hamas commander in the northern Gaza Strip who had helped plan the Oct. 7 massacres. But Israeli intelligence could not find Mr. Biari, who they believed was hidden in the network of tunnels underneath Gaza.

So Israeli officers turned to a new military technology infused with artificial intelligence, three Israeli and American officials briefed on the events said. The technology was developed a decade earlier but had not been used in battle. Finding Mr. Biari provided new incentive to improve the tool, so engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, soon integrated A.I. into it, the people said.

Shortly thereafter, Israel listened to Mr. Biari’s calls and tested the A.I. audio tool, which gave an approximate location for where he was making his calls. Using that information, Israel ordered airstrikes to target the area on Oct. 31, 2023, killing Mr. Biari. More than 125 civilians also died in the attack, according to Airwars, a London-based conflict monitor.

The audio tool was just one example of how Israel has used the war in Gaza to rapidly test and deploy A.I.-backed military technologies to a degree that had not been seen before, according to interviews with nine American and Israeli defense officials, who spoke on the condition of anonymity because the work is confidential.

In the past 18 months, Israel has also combined A.I. with facial recognition software to match partly obscured or injured faces to real identities, turned to A.I. to compile potential airstrike targets, and created an Arabic-language A.I. model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.


Many of these efforts were a partnership between enlisted soldiers in Unit 8200 and reserve soldiers who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as “The Studio,” an innovation hub and place to match experts with A.I. projects, the people said.

Yet even as Israel raced to develop the A.I. arsenal, deployment of the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, the Israeli and American officials said. Some officials have struggled with the ethical implications of the A.I. tools, which could lead to increased surveillance and additional civilian deaths.

No other nation has been as active as Israel in experimenting with A.I. tools in real-time battles, European and American defense officials said, giving a preview of how such technologies may be used in future wars — and how they might also go awry.

“The urgent need to cope with the crisis accelerated innovation, much of it A.I.-powered,” said Hadas Lorber, the head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director at the Israeli National Security Council. “It led to game-changing technologies on the battlefield and advantages that proved critical in combat.”

But the technologies “also raise serious ethical questions,” Ms. Lorber said. She warned that A.I. needs checks and balances, adding that humans should make the final decisions.


A spokeswoman for Israel’s military said she could not comment on specific technologies because of their “confidential nature.” Israel “is committed to the lawful and responsible use of data technology tools,” she said, adding that the military was investigating the strike on Mr. Biari and was “unable to provide any further information until the investigation is complete.”

Meta and Microsoft declined to comment. Google said it has “employees who do reserve duty in various countries around the world. The work those employees do as reservists is not connected to Google.”

Israel previously used conflicts in Gaza and Lebanon to experiment with and advance tech tools for its military, such as drones, phone hacking tools and the Iron Dome defense system, which can help intercept short-range rockets.

After Hamas launched cross-border attacks into Israel on Oct. 7, 2023, killing more than 1,200 people and taking 250 hostages, A.I. technologies were quickly cleared for deployment, four Israeli officials said. That led to the cooperation between Unit 8200 and reserve soldiers in “The Studio” to swiftly develop new A.I. capabilities, they said.

Avi Hasson, the chief executive of Startup Nation Central, an Israeli nonprofit that connects investors with companies, said reservists from Meta, Google and Microsoft had become crucial in driving innovation in drones and data integration.


“Reservists brought know-how and access to key technologies that weren’t available in the military,” he said.

Israel’s military soon used A.I. to enhance its drone fleet. Aviv Shapira, founder and chief executive of XTEND, a software and drone company that works with the Israeli military, said A.I.-powered algorithms were used to build drones that could lock onto and track targets from a distance.

“In the past, homing capabilities relied on zeroing in on an image of the target,” he said. “Now A.I. can recognize and track the object itself — may it be a moving car, or a person — with deadly precision.”

Mr. Shapira said his main clients, the Israeli military and the U.S. Department of Defense, were aware of A.I.’s ethical implications in warfare and discussed responsible use of the technology.

One tool developed by “The Studio” was an Arabic-language A.I. model known as a large language model, three Israeli officers familiar with the program said. (The large language model was earlier reported by +972 Magazine, an Israeli-Palestinian news site.)


Developers previously struggled to create such a model because of a dearth of Arabic-language data to train the technology. When such data was available, it was mostly in standard written Arabic, which is more formal than the dozens of dialects used in spoken Arabic.

The Israeli military did not have that problem, the three officers said. The country had decades of intercepted text messages, transcribed phone calls and posts scraped from social media in spoken Arabic dialects. So Israeli officers created the large language model in the first few months of the war and built a chatbot to run queries in Arabic. They merged the tool with multimedia databases, allowing analysts to run complex searches across images and videos, four Israeli officials said.

When Israel assassinated the Hezbollah leader Hassan Nasrallah in September, the chatbot analyzed the responses across the Arabic-speaking world, three Israeli officers said. The technology differentiated among different dialects in Lebanon to gauge public reaction, helping Israel to assess if there was public pressure for a counterstrike.

At times, the chatbot could not identify some modern slang terms and words that were transliterated from English to Arabic, two officers said. That required Israeli intelligence officers with expertise in different dialects to review and correct its work, one of the officers said.

The chatbot also sometimes provided wrong answers — for instance, returning photos of pipes instead of guns — two Israeli intelligence officers said. Even so, the A.I. tool significantly accelerated research and analysis, they said.


After the Oct. 7 attacks, Israel also began equipping cameras at temporary checkpoints set up between the northern and southern Gaza Strip with the ability to scan and send high-resolution images of Palestinians to an A.I.-backed facial recognition program.

This system, too, sometimes had trouble identifying people whose faces were obscured. That led to arrests and interrogations of Palestinians who were mistakenly flagged by the facial recognition system, two Israeli intelligence officers said.

Israel also used A.I. to sift through data amassed by intelligence officials on Hamas members. Before the war, Israel built a machine-learning algorithm — code-named “Lavender” — that could quickly sort data to hunt for low-level militants. It was trained on a database of confirmed Hamas members and meant to predict who else might be part of the group. Though the system’s predictions were imperfect, Israel used it at the start of the war in Gaza to help choose attack targets.

Few goals loomed larger than finding and eliminating Hamas’s senior leadership. Near the top of the list was Mr. Biari, the Hamas commander who Israeli officials believed played a central role in planning the Oct. 7 attacks.

Israel’s military intelligence quickly intercepted Mr. Biari’s calls with other Hamas members but could not pinpoint his location. So they turned to the A.I.-backed audio tool, which analyzed different sounds, such as sonic bombs and airstrikes.


After deducing an approximate location for where Mr. Biari was placing his calls, Israeli military officials were warned that the area, which included several apartment complexes, was densely populated, two intelligence officers said. An airstrike would need to target several buildings to ensure Mr. Biari was assassinated, they said. The operation was greenlit.

Since then, Israeli intelligence has also used the audio tool alongside maps and photos of Gaza’s underground tunnel maze to locate hostages. Over time, the tool was refined to more precisely find individuals, two Israeli officers said.


WGA cancels Los Angeles awards show amid labor strike


The Writers Guild of America West has canceled its awards ceremony scheduled to take place March 8 as its staff union members continue to strike, demanding higher pay and protections against artificial intelligence.

In a letter sent to members on Sunday, WGA West’s board of directors, including President Michele Mulroney, wrote, “The non-supervisory staff of the WGAW are currently on strike and the Guild would not ask our members or guests to cross a picket line to attend the awards show. The WGAW staff have a right to strike and our exceptional nominees and honorees deserve an uncomplicated celebration of their achievements.”

The New York ceremony, scheduled on the same day, is expected to go forward, while an alternative celebration for Los Angeles-based nominees will take place at a later date, according to the letter.

Comedian and actor Atsuko Okatsuka was set to host the L.A. show, while filmmaker James Cameron was to receive the WGA West Laurel Award.

WGA union staffers have been striking outside the guild’s Los Angeles headquarters on Fairfax Avenue since Feb. 17. The union alleged that management did not intend to reach an agreement on the pending contract. Further, it claimed that guild management had “surveilled workers for union activity, terminated union supporters, and engaged in bad faith surface bargaining.”


On Tuesday, the labor organization said that management had raised the specter of canceling the ceremony during a call about contract negotiations.

“Make no mistake: this is an attempt by WGAW management to drive a wedge between WGSU and WGA membership when we should be building unity ahead of MBA [Minimum Basic Agreement] negotiations with the AMPTP [Alliance of Motion Picture and Television Producers],” wrote the staff union. “We urge Guild management to end this strike now,” the union wrote on Instagram.

The union, made up of more than 100 employees who work in areas including legal, communications and residuals, was formed last spring and first authorized a strike in January, with 82% of its members voting in favor. Contract negotiations, which began in September, have focused on the use of artificial intelligence, pay raises and “basic protections” including grievance procedures.

The WGA has said that it offered “comprehensive proposals with numerous union protections and improvements to compensation and benefits.”

The ceremony’s cancellation, coming just weeks before the Academy Awards, casts a shadow over the upcoming contract negotiations between the WGA and the Alliance of Motion Picture and Television Producers, which represents the studios and streamers.


In 2023, the WGA went on a strike lasting 148 days, the second-longest strike in the union’s history.

Times staff writer Cerys Davies contributed to this report.



Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’


Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.

Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.

“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”

That danger is also imminent.

Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used either for domestic surveillance of Americans or to handle deadly military operations, such as drone attacks, without human supervision.


Those are two red lines that seem rather reasonable, even to Claude.

However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.

Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.

(Tom Williams / CQ-Roll Call Inc. via Getty Images)


The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.

Other AI companies, including white rights advocate Elon Musk’s xAI, maker of the Grok chatbot, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.

Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.

Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.

Again, it seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).


Not long ago, Amodei wrote an essay in which he argued that AI was beneficial and necessary for democracies, but that “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”

He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.

“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”

For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Because the law has not kept pace with technology, such recording could be treated as legally fair game.

Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”


Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?

Help, Claude! Make it make sense.

If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.

Claude pointed out something chilling. It’s not that it would go rogue; it’s that it would be too efficient and fast.

“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.


Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.

I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?

“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”

OK then.

“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”


You know who can provide that legitimacy? Our elected leaders.

It is ludicrous that Amodei and Anthropic are in this position. It reflects a complete abdication by our legislative bodies of their duty to create rules and regulations that are clearly and urgently needed.

Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”

Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground. Without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.

Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.


Because when the machine tells us it’s dangerous to trust it, we should believe it.



Why companies are making this change to their office space to cater to influencers


For the trendiest tenants in Hollywood office buildings, it’s the latest fad, one that goes way beyond designer furniture and art: mini studios.

To capitalize on the never-ending flow of stars and influencers who come through Los Angeles, a growing number of companies are building bright little corners for content creators to try products and shoot short videos. Athletic apparel maker Puma, Kim Kardashian’s Skims and cheeky cosmetics retailer e.l.f. have spaces specifically designed to give people a place to experience and broadcast about their brands.

Hollywood, which hasn’t historically been home to apparel companies, is now attracting the offices of fashion retailers, according to CIM Group, one of the neighborhood’s largest commercial property landlords.

“When we’re touring a space, one of the first items they bring up is, ‘Where can I build a studio?’” said Blake Eckert, who leases CIM offices in L.A.

These studio offices also serve as marketing centers, with showrooms and meeting spaces where brands can host private events not open to the public.


“For companies where brand visibility is really important, there is a trend of creating spaces that don’t just function as offices,” said real estate broker Nicole Mihalka of CBRE, who puts together entertainment property leases and sales.

Puma’s global entertainment marketing team, which works with musical celebrity partners such as Rihanna, ASAP Rocky, Dua Lipa, Skepta and Rosé, is based in the company’s new Hollywood offices, said Allyssa Rapp, head of Puma Studio L.A.

Allyssa Rapp, director of entertainment marketing at Puma, is shown in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)


Hollywood is a central location, she said, for meeting with celebrities, stylists and outside designers, most of whom are based in Los Angeles.

The office is a “creation hub,” she said, where influencers can record content in Puma’s design prototyping lab, which is supported by libraries of materials and equipment used to create Puma apparel. The company, founded in 1948, is known for its emblematic sneakers such as the Speedcat and its lunging feline logo, and makes athletic wear, accessories and equipment.

Puma’s entertainment marketing team also occupies the office and sometimes uses it for exclusive events.

“We use the space as a showroom, as a social space that transforms from a traditional workplace into more of an experiential space,” Rapp said.

Nontraditional uses include content creation, sit-down dinners, product launches, album listening parties and workshops.


“Inviting people into our space and being able to give them high-touch brand experiences is something tangible and important for them,” she said. “The cultural layer is really important for us.”

The company keeps a closet full of Puma products on hand to give VIP guests. Visits to the studio sanctum are by invitation only, though. There’s no retail portal to the exclusive Hollywood offices.

Puma shoes are on display in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)

Puma is also positioning its L.A. studio as a connection point for major sporting events coming to Los Angeles, including the World Cup this summer, the 2027 Super Bowl and the 2028 Olympics.


In-office studios don’t need to be big to be impactful, Mihalka said. “These are smaller stages, closer to green screen than a massive soundstage.”

Social media is the key driver of content created by most businesses, which may set up small booth-like stages where influencers can hawk hot products while offering discounts to people watching them perform.

Bigger, elevated stages can accommodate multiple performers for extended discussions in front of small audiences, with towering screens behind them to set the mood or illustrate products.

Among the tricked-out offices, she said, is Skims. The company, which is valued at $5 billion, is based in a glass-and-steel office building near the fabled intersection of Hollywood Boulevard and Vine Street.

The fashion retailer declined to comment on the studio uses in its headquarters, but according to architecture firm Odaa, it has open and private offices, meeting rooms, collaboration zones, photo studios, sample libraries, prototype showrooms, an executive lounge and a commissary for 400 people.

Pieces of a shoe sit on a workbench in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)

The brands building studios typically want to find the darkest spot on the premises to put their content creation or podcast spaces, Eckert said, where they can limit outside light and sound. That’s commonly near the center of the office floor, far from windows and close to permanent shear walls that limit sound intrusion.

They also need space for green rooms and restrooms dedicated to the talent.

Spotify recently built a fancy podcast studio in a CIM office building on trendy Sycamore Avenue that is open by invitation only to video creators in Spotify’s partner program.


“Ambitious shows need spaces that support big ideas,” Bill Simmons, head of talk strategy at Spotify, said in a statement. “These studios give teams room to experiment and keep pushing what’s possible.”

