
Technology

AI agents are science fiction, not yet ready for primetime


This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on all things AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

It all started with J.A.R.V.I.S. Yes, that J.A.R.V.I.S. The one from the Marvel movies.

Well, maybe it didn’t start with Iron Man’s AI assistant, but the fictional system definitely helped the concept of an AI agent along. Whenever I’ve interviewed AI industry folks about agentic AI, they often point to J.A.R.V.I.S. as an example of the ideal AI tool in many ways — one that knows what you need done before you even ask, can analyze and find insights in large swaths of data, and can offer strategic advice or run point on certain aspects of your business. People sometimes disagree on the exact definition of an AI agent, but at its core, it’s a step beyond chatbots in that it’s a system that can perform multistep, complex tasks on your behalf without constantly needing back-and-forth communication with you. It essentially makes its own to-do list of subtasks it needs to complete in order to get to your preferred end goal. That fantasy is closer to being a reality in many ways, but when it comes to actual usefulness for the everyday user, there are a lot of things that don’t work — and maybe will never work.

The term “AI agent” has been around for a long time, but it especially started trending in the tech industry in 2023. That was the year of the concept of AI agents; the term was on everyone’s lips as people tried to suss out the idea and how to make it a reality, but you didn’t see many successful use cases. The next year, 2024, was the year of deployment — people were really putting the code out into the field and seeing what it could do. (The answer, at the time, was… not much. And filled with a bunch of error messages.)

I can pinpoint the hype around AI agents becoming widespread to one specific announcement: In February 2024, Klarna, a fintech company, said that after one month, its AI assistant (powered by OpenAI’s tech) had successfully done the work of 700 full-time customer service agents and automated two-thirds of the company’s customer service chats. For months, those statistics came up in almost every AI industry conversation I had.


The hype never died down, and in the following months, every Big Tech CEO seemed to harp on the term in every earnings call. Executives at Amazon, Meta, Google, Microsoft, and a whole host of other companies began to talk about their commitment to building useful and successful AI agents — and tried to put their money where their mouths were to make it happen.

The vision was that one day, an AI agent could do everything from book your travel to generate visuals for your business presentations. The ideal tool could even, say, find a good time and place to hang out with a bunch of your friends that works with all of your calendars, food preferences, and dietary restrictions — and then book the dinner reservation and create a calendar event for everyone.

Now let’s talk about the “AI coding” of it all: For years, AI coding has been carrying the agentic AI industry. If you asked anyone about real-life, successful, not-annoying use cases for AI agents happening right now and not conceptually in a not-too-distant future, they’d point to AI coding — and that was pretty much the only concrete thing they could point to. Many engineers use AI agents for coding, and they’re seen as objectively pretty good. Good enough, in fact, that at Microsoft and Google, up to 30 percent of the code is now being written by AI agents. And for startups like OpenAI and Anthropic, which burn through cash at high rates, one of their biggest revenue generators is AI coding tools for enterprise clients.

So until recently, AI coding has been the main real-life use case for AI agents, but obviously, that doesn’t cater to the everyday consumer. The vision, remember, was always a jack-of-all-trades sort of AI agent for the “everyman.” And we’re not quite there yet — but in 2025, we’ve gotten closer than we’ve ever been before.

Last October, Anthropic kicked things off by introducing “Computer Use,” a tool that allowed Claude to use a computer like a human might — browsing, searching, accessing different platforms, and completing complex tasks on a user’s behalf. The general consensus was that the tool was a step forward for technology, but reviews said that in practice, it left a lot to be desired. Fast-forward to January 2025, and OpenAI released Operator, its version of the same thing, and billed it as a tool for filling out forms, ordering groceries, booking travel, and creating memes. Once again, in practice, many users agreed that the tool was buggy, slow, and not always efficient. But again, it was a significant step. The next month, OpenAI released Deep Research, an agentic AI tool that could compile long research reports on any topic for a user, and that moved things forward, too. Some people said the research reports were more impressive in length than content, but others were seriously impressed. And then in July, OpenAI combined Deep Research and Operator into one AI agent product: ChatGPT Agent. Was it better than most consumer-facing agentic AI tools that came before? Absolutely. Was it still tough to make work successfully in practice? Absolutely.


So there’s a long way to go to reach that vision of an ideal AI agent, but at the same time, we’re technically closer than we’ve ever been before. That’s why tech companies are putting more and more money into agentic AI, whether that means additional compute, research and development, or talent. Google recently hired Windsurf’s CEO, cofounder, and some R&D team members, specifically to help Google push its AI agent projects forward. And companies like Anthropic and OpenAI are racing each other up the ladder, rung by rung, to introduce incremental features to put these agents in the hands of consumers. (Anthropic, for instance, just announced a Chrome extension for Claude that allows it to work in your browser.)

So really, what happens next is that we’ll see AI coding continue to improve (and, unfortunately, potentially replace the jobs of many entry-level software engineers). We’ll also see the consumer-facing agent products improve, likely slowly but surely. And we’ll see agents used increasingly for enterprise and government applications, especially since Anthropic, OpenAI, and xAI have all debuted government-specific AI platforms in recent months.

Overall, expect to see more false starts and stops, as well as mergers and acquisitions, as the AI agent competition picks up (and the hype bubble continues to balloon). One question we’ll all have to ask ourselves as the months go on: What do we actually want a conceptual “AI agent” to be able to do for us? Do we want them to replace just the logistics or also the more personal, human aspects of life (e.g., helping write a wedding toast or a note for a flower delivery)? And how good are they at helping with the logistics vs. the personal stuff? (Answer for that last one: not very good at the moment.)

  • Besides the astronomical environmental cost of AI — especially for large models, which are the ones powering AI agent efforts — there’s an elephant in the room. And that’s the idea that “smarter AI that can do anything for you” isn’t always good, especially when people want to use it to do… bad things. Things like creating chemical, biological, radiological, and nuclear (CBRN) weapons. Top AI companies say they’re increasingly worried about the risks of that. (Of course, they’re not worried enough to stop building.)
  • Let’s talk about the regulation of it all. A lot of people have fears about the implications of AI, but many aren’t fully aware of the potential dangers posed by uber-helpful, aiming-to-please AI agents in the hands of bad actors, both stateside and abroad (think: “vibe-hacking,” romance scams, and more). AI companies say they’re ahead of the risk with the voluntary safeguards they’ve implemented. But many others say this may be a case for an external gut-check.



Technology

Google’s annual revenue tops $400 billion for the first time


Google’s parent company, Alphabet, has earned more than $400 billion in annual revenue for the first time. The company announced the milestone as part of its Q4 2025 earnings report released on Wednesday, which highlights the 15 percent year-over-year increase as its cloud business and YouTube continue to grow.

As noted in the earnings report, Google’s Cloud business reached a $70 billion run rate in 2025, while YouTube’s annual revenue soared beyond $60 billion across ads and subscriptions. Alphabet CEO Sundar Pichai told investors that YouTube remains the “number one streamer,” citing data from Nielsen. The company also now has more than 325 million paid subscribers, led by Google One and YouTube Premium.

Additionally, Pichai noted that Google Search saw more usage over the past few months “than ever before,” adding that daily AI Mode queries have doubled since launch. Google will soon take advantage of the popularity of its Gemini app and AI Mode, as it plans to build an agentic checkout feature into both tools.


Technology

Waymo under federal investigation after child struck



Federal safety regulators are once again taking a hard look at self-driving cars after a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet.

This time, the investigation centers on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours. The crash happened Jan. 23 and raised immediate questions about how autonomous vehicles behave around children, school zones and unpredictable pedestrian movement.

On Jan. 29, the National Highway Traffic Safety Administration confirmed it had opened a new preliminary investigation into Waymo’s automated driving system.




Waymo operates Level 4 self-driving vehicles in select U.S. cities, where the car controls all driving tasks without a human behind the wheel. (AP Photo/Terry Chea, File)

What happened near the Santa Monica school?

According to documents posted by NHTSA, the crash occurred within two blocks of an elementary school during normal drop-off hours. The area was busy. There were multiple children present, a crossing guard on duty and several vehicles double-parked along the street.

Investigators say the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who suffered minor injuries. No safety operator was inside the vehicle at the time.

NHTSA’s Office of Defects Investigation is now examining whether the autonomous system exercised appropriate caution given its proximity to a school zone and the presence of young pedestrians.



Federal investigators are now examining whether Waymo’s automated system exercised enough caution near a school zone during morning drop-off hours. (Waymo)

Why federal investigators stepped in

The NHTSA says the investigation will focus on how Waymo’s automated driving system is designed to behave in and around school zones, especially during peak pickup and drop-off times.

That includes whether the vehicle followed posted speed limits, how it responded to visual cues like crossing guards and parked vehicles, and whether its post-crash response met federal safety expectations, including how Waymo handled the incident after it occurred.

Waymo said it voluntarily contacted regulators the same day as the crash and plans to cooperate fully with the investigation. In a statement, the company said it remains committed to improving road safety for riders and everyone sharing the road.


Waymo responds to the federal investigation

We reached out to Waymo for comment, and the company provided the following statement:

“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road. Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian. Following the event, we voluntarily contacted the National Highway Traffic Safety Administration (NHTSA) that same day. NHTSA has indicated to us that they intend to open an investigation into this incident, and we will cooperate fully with them throughout the process. 

“The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made. 

“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.

“Following contact, the pedestrian stood up immediately, walked to the sidewalk and we called 911. The vehicle remained stopped, moved to the side of the road and stayed there until law enforcement cleared the vehicle to leave the scene. 


“This event demonstrates the critical value of our safety systems. We remain committed to improving road safety where we operate as we continue on our mission to be the world’s most trusted driver.”

Understanding Waymo’s autonomy level

Waymo vehicles fall under Level 4 autonomy on NHTSA’s six-level scale.

At Level 4, the vehicle handles all driving tasks within specific service areas. A human driver is not required to intervene, and no safety operator needs to be present inside the car. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.

The NHTSA has been clear that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them.

This is not Waymo’s first federal probe

This latest investigation follows a previous NHTSA evaluation that opened in May 2024. That earlier probe examined reports of Waymo vehicles colliding with stationary objects like gates, chains and parked cars. Regulators also reviewed incidents in which the vehicles appeared to disobey traffic control devices.


That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses. Safety advocates say the new incident highlights unresolved concerns.


No safety operator was inside the vehicle at the time of the crash, raising fresh questions about how autonomous cars handle unpredictable situations involving children. (Waymo)

What this means for you

If you live in a city where self-driving cars operate, this investigation matters more than it might seem. School zones are already high-risk areas, even for attentive human drivers. Autonomous vehicles must be able to detect unpredictable behavior, anticipate sudden movement and respond instantly when children are present.

This case will likely influence how regulators set expectations for autonomous driving systems near schools, playgrounds and other areas with vulnerable pedestrians. It could also shape future rules around local oversight, data reporting and operational limits for self-driving fleets.


For parents, commuters and riders, the outcome may affect where and when autonomous vehicles are allowed to operate.



Kurt’s key takeaways

Self-driving technology promises safer roads, fewer crashes and less human error. But moments like this remind us that the hardest driving scenarios often involve human unpredictability, especially when children are involved. Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? How they answer that question could help define the next phase of autonomous vehicle regulation in the United States.


Do you feel comfortable sharing the road with self-driving cars near schools, or is that a line technology should not cross yet? Let us know by writing to us at Cyberguy.com.




Technology

Adobe actually won’t discontinue Animate


Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company now says that Animate will be in maintenance mode and that it has “no plans to discontinue or remove access” to the app. Animate will still receive “ongoing security and bug fixes” and will still be available for “both new and existing users,” but it won’t get new features.

An announcement email that went out to Adobe Animate customers about the discontinuation did “not meet our standards and caused a lot of confusion and angst within the community,” according to a Reddit post from Adobe community team member Mike Chambers.

Animate will be available in maintenance mode “indefinitely” to “individual, small business, and enterprise customers,” according to Adobe. Before the change, Adobe said that non-enterprise customers could access Animate and download content until March 1st, 2027, while enterprise customers had until March 1st, 2029.

