Is 49ers Star George Kittle Playing vs Cardinals?

GLENDALE — San Francisco 49ers TE George Kittle will play in Week 18’s regular season finale against the Arizona Cardinals.

Kittle was questionable with hamstring/ankle injuries entering this week. He’s played in 14 games for San Francisco thus far and has tallied 76 receptions for 1,079 yards and eight touchdowns, which was good enough for another Pro Bowl nod.

More on Kittle’s season from 49ers.com:

“In 14 games this season (all starts), he has registered 76 receptions for 1,079 yards (14.2 average) and eight touchdowns. Among NFL tight ends, Kittle ranks fifth in receptions, third in receiving yards, second in yards per reception and second in receiving touchdowns. His eight receiving touchdowns are the most among NFC tight ends. He has also registered four games with 100-or-more receiving yards this season, the most by any tight end in the NFL. Kittle’s 1,079 receiving yards mark his fourth career and second-consecutive 1,000-yard season. His four career 1,000-yard seasons are the third-most in franchise history and tied for the second-most by a tight end in NFL history.”

The 49ers’ season hasn’t exactly gone to plan, as injuries derailed the defending NFC champions to a 6-10 record with one game remaining. Last week’s loss to the Detroit Lions locked them into fourth place in the NFC West.

It’s a similar story for Arizona, whose 6-4 start cooled to a 7-9 record entering today. They’re locked into third in the division.

Kickoff between the Cardinals and 49ers is set for 2:25 PM local time here at State Farm Stadium.



Arizona’s Tommy Lloyd keeping mum as UNC rumors swirl: ‘Nothing is distracting me’


INDIANAPOLIS — Give Tommy Lloyd credit. The Arizona coach isn’t budging despite rumors he could leave the Wildcats for the vacant North Carolina job.

All along, Lloyd has said his only focus is on leading top-seeded Arizona to a national championship, offering no hints about his future plans.

That didn’t change Thursday.

“Listen, I’ve got my full focus on this team. Nothing is distracting me,” Lloyd said. “That’s just how I’ve decided to approach it.

“I’m a simple guy. I am kind of just one thing at a time. I’m not a multitasker. You can ask my wife. I’m 100 percent locked in on Arizona basketball right now, and I’m excited to see what this team can do.”

Arizona is back in the Final Four for the first time in 25 years. Lloyd, the former Gonzaga assistant coach, has led the Wildcats to a 145-38 record in five seasons.

Lloyd drew headlines last weekend after Arizona won the West Region, saying, “Arizona is going to have another good coach after me. I promise you.”

Pressed on the matter earlier this week, Lloyd became somewhat combative.

“You might call them ‘distractions,’ but it’s because you’re distracted,” he told reporters. “That doesn’t mean I’m distracted or we’re distracted.”

Lloyd has yet to say he’s not interested in the North Carolina job or that he will return to Arizona.

Arizona head coach Tommy Lloyd talks to the media at Lucas Oil Stadium on April 2, 2026, in Indianapolis. (Robert Deutsch-Imagn Images)

Michigan point guard Elliot Cadeau was taken to a hospital Wednesday before the Wolverines left for the Final Four after suffering an allergic reaction from accidental nut exposure.

The junior was with the team Thursday, expected to practice later and play Saturday against Arizona in a matchup of No. 1 seeds. He called it “minor,” not nearly as bad as a similar allergic reaction he had as a kid.

“Very unfortunate for him to have to go through that. If it’s the worst thing that happens to us, then we’re very blessed,” Michigan coach Dusty May said.

The West Orange, N.J., native is averaging 10.2 points and 5.8 assists for Michigan.

Arizona girl who vanished 32 years ago has been found alive, sheriff says


An Arizona girl who vanished in 1994 has been found alive, the Gila County sheriff said Wednesday. 

Christina Marie Plante disappeared from Star Valley, Arizona, when she was 13 years old, the Gila County Sheriff’s Office said. She was last seen on May 19, 1994, around 12:30 p.m., after leaving home on foot to go to a stable where her horse was kept, according to a missing persons poster. She was last seen wearing shorts, a t-shirt and tennis shoes, and was considered “missing/endangered and under suspicious circumstances,” according to the sheriff’s office.

Sheriff Adam J. Shepherd said in a news release that the girl was reported missing at the time, and “extensive search efforts” involving local and regional resources were conducted. Plante was listed in national missing children databases, and missing persons posters were distributed around the region, state and country. 

“Despite exhaustive ground searches, interviews and investigative follow-up, no viable leads were developed” at the time of her disappearance, Shepherd said, and the case remained open. Over the decades, investigators re-examined evidence and pursued any new information that became available, he said.

The sheriff’s office eventually established a cold case unit, which focused on unresolved investigations, Shepherd said. Detectives in the unit used “advances in technology, modern investigative techniques and detailed case review” to develop new leads that “ultimately led to a breakthrough.”

A missing persons poster for Christina Marie Plante. (Gila County Sheriff’s Office)


Shepherd did not say where Plante was found, or share any circumstances of her disappearance “out of respect for Christina’s privacy and well-being.” Shepherd said that investigators have confirmed her identity, and that her status as a missing person “has been officially resolved.” 

Shepherd said that the case “underscores the importance of cold case review initiatives and the impact of evolving technology in bringing long-awaited answers to families and communities,” and said the sheriff’s office “remains committed to pursuing all unresolved cases.”



Arizona State University researcher warns against overtrusting AI in Iran strikes


PHOENIX (AZFamily) — The U.S. military’s AI-powered battlefield intelligence system can compress targeting decisions that once took days into minutes or seconds. But in that push for speed, a preliminary inquiry by the Pentagon found the U.S. relied on outdated intelligence and struck an Iranian school, killing about 170 people, mostly children.

It turns out there’s a lot of research on what happens when humans deploy AI in battlefield settings and why things can go wrong.

“AI is not ready for prime time,” said Nancy Cooke, director of ASU’s Center for Human, AI, and Robot Teaming, on the latest episode of Generation AI. “It is unreliable. It can do unexpected things. And humans may have the tendency to overtrust it.”

Cooke has spent years studying what happens when humans team up with artificial intelligence in high-stakes scenarios. In her research on simulated drone pilot teams, she’s watched AI perform its assigned tasks flawlessly while simultaneously making the humans perform worse.

AI-powered tools like the Maven Smart System, the Pentagon’s battlefield intelligence platform that identifies and prioritizes targets, create a risk for over-reliance on AI recommendations, she said.

Large language models appear deceptively human-like, Cooke explained, but “they’re very much not like human intelligence, although people may think so and then overtrust them as a result.”

Three-person drone experiment

Cooke’s research team created simulated three-person drone teams, then substituted AI for one human pilot. The AI executed its core functions without error, controlling airspeed, heading and altitude.

But something unexpected happened.

“[The AI pilot] acted like there was no one else on the team,” Cooke said. “It did not anticipate the information needs of its fellow team members. And as a result, the coordination of the whole team broke down.”

The humans changed their behavior, too. Thinking they were working with a superior AI, the research subjects decided to follow the machine’s lead. “AI isn’t anticipating information needs. So, I’m going to stop doing that too,” seemed to be their subconscious logic.

The result: teams with AI were slower to get reconnaissance photos than all-human teams, despite the AI’s superior individual performance.

“Even though AI may be fast, the combination of AI working with humans may be slow and bad,” Cooke said.

“It Shouldn’t Be Trusted”

Both over-reliance and under-trust of AI pose challenges on the battlefield, but Cooke is convinced one error is more serious.

“Definitely over-trusting is worse. Because it shouldn’t be trusted. It’s going to give you bad information a lot of the time. Not all of the time. And it’s going to be fast, but that’s not necessarily better,” she said.

The Maven Smart System represents exactly what worries her most. The Pentagon has praised the system for combining eight or nine different intelligence systems into one, condensing targeting decisions from days or hours into minutes.

“So many things can go wrong,” Cooke said. “You have all these different system components that haven’t been tested. They have no safeguards on them. We don’t know how they play off of each other and work together. It’s just a recipe for disaster.”

The Anthropic precedent

Some AI companies are drawing their own red lines. The Pentagon labeled Anthropic a supply chain risk in March after the company refused to grant the military a license to use its products for “any lawful purpose,” without restrictions for domestic mass surveillance or autonomous lethal weaponry.

Anthropic CEO Dario Amodei said he objected, in part, because he did not believe the company’s models could reliably handle such grave tasks.

“Anthropic was spot on. They’re not ready,” Cooke said. “And I don’t know that they’re going to be ready in a very long time.”

Her position goes further than timing concerns. Some decisions, she argues, should remain exclusively human: “decisions to target something, decisions to shoot.”

Information overload

Cooke’s wildfire research reveals another dimension of the challenge of partnering humans with AI. Drones can collect vast amounts of reconnaissance data, but processing it remains “a complex cognitive task to go over reels and reels of video.”

Her research found that too much information creates its own problems, leading to decision paralysis and worse outcomes, the opposite of what AI integration promises to deliver.

The pattern holds across domains: AI excels at narrow technical tasks but struggles with the contextual awareness and anticipation that effective teamwork requires, she said.

“I think you have to make sure that people realize that this is not human intelligence and humans have a lot to offer,” Cooke said. “The best combination would be good human intelligence coupled with good technology.”

The escalation question

Critics argue that moral qualms about autonomous weapons put the U.S. at a disadvantage against adversaries like China or Russia, who might deploy fully autonomous systems.

They worry about next-generation weapons that can decide to fire on their own. In a world where milliseconds might be the difference between life and death, these critics argue human-in-the-loop weapons won’t be able to keep up.

Cooke sees it differently: she thinks autonomous systems run the risk of friendly fire and may be vulnerable to foreign hacking, turning advanced weapons into threats against their own operators.

More broadly, she views the AI arms race as inherently escalatory, potentially raising the risk of countries opting for a weapon of last resort: a nuclear bomb. “People are pushing to, you know, move fast and break things. And indeed, we will.”

Copyright 2026 KTVK/KPHO. All rights reserved.


