
Arizona State University researcher warns against overtrusting AI in Iran strikes



PHOENIX (AZFamily) — The U.S. military’s AI-powered battlefield intelligence system can compress targeting decisions that once took days into minutes or seconds. But in that push for speed, a preliminary inquiry by the Pentagon found the U.S. relied on outdated intelligence and struck an Iranian school, killing about 170 people, mostly children.

It turns out there’s a lot of research on what happens when humans deploy AI in battlefield settings and why things can go wrong.

“AI is not ready for prime time,” said Nancy Cooke, director of ASU’s Center for Human, AI, and Robot Teaming, on the latest episode of Generation AI. “It is unreliable. It can do unexpected things. And humans may have the tendency to overtrust it.”

Cooke has spent years studying what happens when humans team up with artificial intelligence in high-stakes scenarios. In her research on simulated drone pilot teams, she’s watched AI perform its assigned tasks flawlessly while simultaneously making the humans perform worse.


AI-powered tools like the Maven Smart System, the Pentagon’s battlefield intelligence platform that identifies and prioritizes targets, create a risk of over-reliance on AI recommendations, she said.

Large language models appear deceptively human-like, Cooke explained, but “they’re very much not like human intelligence, although people may think so and then overtrust them as a result.”

Three-person drone experiment

Cooke’s research team created simulated three-person drone teams, then substituted AI for one human pilot. The AI executed its core functions without error, controlling airspeed, heading and altitude.

But something unexpected happened.

“[The AI pilot] acted like there was no one else on the team,” Cooke said. “It did not anticipate the information needs of its fellow team members. And as a result, the coordination of the whole team broke down.”


The humans changed their behavior, too. Thinking they were working with a superior AI, the research subjects decided to follow the machine’s lead. Their unspoken logic seemed to be: “The AI isn’t anticipating information needs, so I’m going to stop doing that too.”

The result: teams with an AI pilot obtained reconnaissance photos more slowly than all-human teams, despite the AI’s superior individual performance.

“Even though AI may be fast, the combination of AI working with humans may be slow and bad,” Cooke said.

“It Shouldn’t Be Trusted”

Both over-reliance and under-trust of AI pose challenges on the battlefield, but Cooke is convinced one error is more serious.

“Definitely over-trusting is worse. Because it shouldn’t be trusted. It’s going to give you bad information a lot of the time. Not all of the time. And it’s going to be fast, but that’s not necessarily better,” she said.


The Maven Smart System represents exactly what worries her most. The Pentagon has praised the system for combining eight or nine different intelligence systems into one, condensing targeting decisions from days or hours into minutes.

“So many things can go wrong,” Cooke said. “You have all these different system components that haven’t been tested. They have no safeguards on them. We don’t know how they play off of each other and work together. It’s just a recipe for disaster.”

The Anthropic precedent

Some AI companies are drawing their own red lines. The Pentagon labeled Anthropic a supply chain risk in March after the company refused to grant the military a license to use its products for “any lawful purpose” without restrictions on domestic mass surveillance or autonomous lethal weaponry.

Anthropic CEO Dario Amodei said he objected, in part, because he did not believe the company’s models could reliably handle such grave tasks.

“Anthropic was spot on. They’re not ready,” Cooke said. “And I don’t know that they’re going to be ready in a very long time.”


Her position goes further than timing concerns. Some decisions, she argues, should remain exclusively human: “decisions to target something, decisions to shoot.”

Information overload

Cooke’s wildfire research reveals another dimension of the challenge of partnering humans with AI. Drones can collect vast amounts of reconnaissance data, but processing it remains “a complex cognitive task to go over reels and reels of video.”

Her research found that too much information creates its own problems, leading to decision paralysis and worse outcomes: the opposite of what AI integration promises to deliver.

The pattern holds across domains: AI excels at narrow technical tasks but struggles with the contextual awareness and anticipation that effective teamwork requires, she said.

“I think you have to make sure that people realize that this is not human intelligence and humans have a lot to offer,” Cooke said. “The best combination would be good human intelligence coupled with good technology.”


The escalation question

Critics argue that moral qualms about autonomous weapons put the U.S. at a disadvantage against adversaries like China or Russia, who might deploy fully autonomous systems.

They worry about next-generation weapons that can decide to fire on their own. In a world where milliseconds might be the difference between life and death, these critics argue human-in-the-loop weapons won’t be able to keep up.

Cooke sees it differently: she thinks autonomous systems run the risk of friendly fire and may be vulnerable to foreign hacking, turning advanced weapons into threats against their own operators.

More broadly, she views the AI arms race as inherently escalatory, potentially raising the risk of countries opting for a weapon of last resort: a nuclear bomb. “People are pushing to, you know, move fast and break things. And indeed, we will.”


Copyright 2026 KTVK/KPHO. All rights reserved.




Where to watch Arizona Diamondbacks vs. New York Mets: Live stream, start time, TV channel, odds for Thursday, April 9



The Arizona Diamondbacks (6-6), tied for second in the NL West, face the New York Mets (7-5), tied for second in the NL East, with the Mets favored at -160 odds. The starting pitchers are Eduardo Rodriguez for Arizona (0.00 ERA), and Nolan McLean for New York, with a 2.61 ERA. The over/under is set at 7 runs.

How to Watch Arizona Diamondbacks vs. New York Mets

  • Time: 7:10 p.m. ET / 4:10 p.m. PT

  • Where: Citi Field, Flushing, Queens, NY

  • TV Channels: SNY, Dbacks.TV, MLB Network


Team records

  • Arizona Diamondbacks: 6-6 (tied for second in NL West)

  • New York Mets: 7-5 (tied for second in NL East)

Odds (via BetMGM)

  • Spread: New York Mets -1.5

  • Moneyline: New York Mets -160 (59.1%) / Arizona Diamondbacks +135 (40.9%)
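The percentages listed beside the moneyline appear to be no-vig implied probabilities derived from the American odds. As an illustrative sketch (the standard odds-conversion formula, not anything published by the sportsbook), the listed figures can be reproduced like this:

```python
def implied_prob(american_odds):
    """Convert American odds to a raw implied probability."""
    if american_odds < 0:
        # Favorite: e.g. -160 means risk 160 to win 100
        return -american_odds / (-american_odds + 100)
    # Underdog: e.g. +135 means risk 100 to win 135
    return 100 / (american_odds + 100)

# Raw implied probabilities still include the sportsbook's margin (the vig)
mets = implied_prob(-160)    # ~0.615
dbacks = implied_prob(135)   # ~0.426

# Normalizing so the two sides sum to 100% removes the vig
total = mets + dbacks
print(round(100 * mets / total, 1))    # 59.1
print(round(100 * dbacks / total, 1))  # 40.9
```

Note that the raw probabilities sum to more than 100%; the excess is the book’s margin, and the quoted 59.1%/40.9% split matches the normalized figures.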

Starting pitchers

Arizona Diamondbacks: Eduardo Rodriguez (0-0; ERA: 0.00; K: 8; WHIP: 0.92)

New York Mets: Nolan McLean (1-0; ERA: 2.61; K: 12; WHIP: 0.87)


Weather: 44°F at first pitch




Arizona law closes loophole for registered sex offenders



A new law is in effect in Arizona, tightening name-change rules for sex offenders. Those trying to change their name must now disclose their status, in a move to keep victims better informed and to keep the community safer. FOX 10’s Megan Spector learns more about the law closing the loophole. 




Arizona teen who vanished in 1994 resurfaces decades later as mom of 3 who works for private investigator



A runaway Arizona schoolgirl last seen 32 years ago is reportedly living as a married mom of three who works for a private investigator.

Christina Plante was 13 when she disappeared from her parents’ house in Star Valley, northeast of Phoenix, one Sunday afternoon in May 1994.

Missing teen Christina Plante has been found living as a married mother of three. Facebook / Shawn Hollon
Christina Plante lives in Missouri with her husband, Shaun Hollon. Facebook / Shawn Hollon

Now 45, the former missing teen was discovered living in Springfield, Missouri, in a five-bedroom house she shares with her husband, Shaun Hollon, 49, the Daily Mail reported.

Since her identity was revealed, Plante has given very few details about the past three decades.


She reportedly married as a teen and had three sons before earning a psychology degree and getting a job with a private investigations firm.

The teen disappeared in 1994. Gila County Sheriff’s Office

“She isn’t being very cooperative with us. She wouldn’t say who she met with or how she even got out of town,” Gila County Sheriff’s Office Chief Deputy Jim Lahti told the Daily Mail.

“She did admit that she ran away. She didn’t want to be there,” he added.


