
Technology

AI agents are science fiction not yet ready for primetime


This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on all things AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

It all started with J.A.R.V.I.S. Yes, that J.A.R.V.I.S. The one from the Marvel movies.

Well, maybe it didn’t start with Iron Man’s AI assistant, but the fictional system definitely helped the concept of an AI agent along. Whenever I’ve interviewed AI industry folks about agentic AI, they often point to J.A.R.V.I.S. as an example of the ideal AI tool in many ways — one that knows what you need done before you even ask, can analyze and find insights in large swaths of data, and can offer strategic advice or run point on certain aspects of your business. People sometimes disagree on the exact definition of an AI agent, but at its core, it’s a step beyond chatbots in that it’s a system that can perform multistep, complex tasks on your behalf without constantly needing back-and-forth communication with you. It essentially makes its own to-do list of subtasks it needs to complete in order to get to your preferred end goal. That fantasy is closer to being a reality in many ways, but when it comes to actual usefulness for the everyday user, there are a lot of things that don’t work — and maybe will never work.
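
That "to-do list of subtasks" idea is the core of what separates an agent from a chatbot, and it can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (a real agent would be calling a language model and external tools at each step), but the loop is the point: given one goal, the system plans its own subtasks and works through them without further back-and-forth.

```python
# Minimal sketch of an agentic loop. `plan` and `execute` are hypothetical
# stand-ins for model calls and tool use (browsing, calendars, bookings).

def plan(goal: str) -> list[str]:
    # A real agent would ask a model to decompose the goal into steps.
    return [f"research: {goal}", f"decide: {goal}", f"act: {goal}"]

def execute(subtask: str) -> str:
    # A real agent would invoke tools here and check the results.
    return f"done: {subtask}"

def run_agent(goal: str) -> list[str]:
    results = []
    for subtask in plan(goal):  # the agent's own to-do list
        results.append(execute(subtask))
    return results

print(run_agent("book dinner for six"))
```

The hard part in practice is not the loop itself but making `plan` and `execute` reliable, which is exactly where the consumer tools described below keep stumbling.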

The term “AI agent” has been around for a long time, but it especially started trending in the tech industry in 2023. That was the year of the concept of AI agents; the term was on everyone’s lips as people tried to suss out the idea and how to make it a reality, but you didn’t see many successful use cases. The next year, 2024, was the year of deployment — people were really putting the code out into the field and seeing what it could do. (The answer, at the time, was… not much. And filled with a bunch of error messages.)

I can pinpoint the hype around AI agents becoming widespread to one specific announcement: In February 2024, Klarna, a fintech company, said that after one month, its AI assistant (powered by OpenAI’s tech) had successfully done the work of 700 full-time customer service agents and automated two-thirds of the company’s customer service chats. For months, those statistics came up in almost every AI industry conversation I had.


The hype never died down, and in the following months, every Big Tech CEO seemed to harp on the term in every earnings call. Executives at Amazon, Meta, Google, Microsoft, and a whole host of other companies began to talk about their commitment to building useful and successful AI agents — and tried to put their money where their mouths are to make it happen.

The vision was that one day, an AI agent could do everything from book your travel to generate visuals for your business presentations. The ideal tool could even, say, find a good time and place to hang out with a bunch of your friends that works with all of your calendars, food preferences, and dietary restrictions — and then book the dinner reservation and create a calendar event for everyone.

Now let’s talk about the “AI coding” of it all: For years, AI coding has been carrying the agentic AI industry. If you asked anyone about real-life, successful, not-annoying use cases for AI agents happening right now and not conceptually in a not-too-distant future, they’d point to AI coding — and that was pretty much the only concrete thing they could point to. Many engineers use AI agents for coding, and they’re seen as objectively pretty good. Good enough, in fact, that at Microsoft and Google, up to 30 percent of the code is now being written by AI agents. And for startups like OpenAI and Anthropic, which burn through cash at high rates, one of their biggest revenue generators is AI coding tools for enterprise clients.

So until recently, AI coding has been the main real-life use case of AI agents, but obviously, that’s not pandering to the everyday consumer. The vision, remember, was always a jack-of-all-trades sort of AI agent for the “everyman.” And we’re not quite there yet — but in 2025, we’ve gotten closer than we’ve ever been before.

Last October, Anthropic kicked things off by introducing “Computer Use,” a tool that allowed Claude to use a computer like a human might — browsing, searching, accessing different platforms, and completing complex tasks on a user’s behalf. The general consensus was that the tool was a step forward for technology, but reviews said that in practice, it left a lot to be desired. Fast-forward to January 2025, and OpenAI released Operator, its version of the same thing, and billed it as a tool for filling out forms, ordering groceries, booking travel, and creating memes. Once again, in practice, many users agreed that the tool was buggy, slow, and not always efficient. But again, it was a significant step. The next month, OpenAI released Deep Research, an agentic AI tool that could compile long research reports on any topic for a user, and that pushed things forward, too. Some people said the research reports were more impressive in length than content, but others were seriously impressed. And then in July, OpenAI combined Deep Research and Operator into one AI agent product: ChatGPT Agent. Was it better than most consumer-facing agentic AI tools that came before? Absolutely. Was it still tough to make work successfully in practice? Absolutely.


So there’s a long way to go to reach that vision of an ideal AI agent, but at the same time, we’re technically closer than we’ve ever been before. That’s why tech companies are putting more and more money into agentic AI, by way of investing in additional compute, research and development, or talent. Google recently hired Windsurf’s CEO, cofounder, and some R&D team members, specifically to help Google push its AI agent projects forward. And companies like Anthropic and OpenAI are racing each other up the ladder, rung by rung, to introduce incremental features to put these agents in the hands of consumers. (Anthropic, for instance, just announced a Chrome extension for Claude that allows it to work in your browser.)

So really, what happens next is that we’ll see AI coding continue to improve (and, unfortunately, potentially replace the jobs of many entry-level software engineers). We’ll also see the consumer-facing agent products improve, likely slowly but surely. And we’ll see agents used increasingly for enterprise and government applications, especially since Anthropic, OpenAI, and xAI have all debuted government-specific AI platforms in recent months.

Overall, expect to see more false starts, stops and restarts, and mergers and acquisitions as the AI agent competition picks up (and the hype bubble continues to balloon). One question we’ll all have to ask ourselves as the months go on: What do we actually want a conceptual “AI agent” to be able to do for us? Do we want them to replace just the logistics or also the more personal, human aspects of life (i.e., helping write a wedding toast or a note for a flower delivery)? And how good are they at helping with the logistics vs. the personal stuff? (Answer to that last one: not very good at the moment.)

  • Besides the astronomical environmental cost of AI — especially for large models, which are the ones powering AI agent efforts — there’s an elephant in the room. And that’s the idea that “smarter AI that can do anything for you” isn’t always good, especially when people want to use it to do… bad things. Things like creating chemical, biological, radiological, and nuclear (CBRN) weapons. Top AI companies say they’re increasingly worried about the risks of that. (Of course, they’re not worried enough to stop building.)
  • Let’s talk about the regulation of it all. A lot of people have fears about the implications of AI, but many aren’t fully aware of the potential dangers posed by uber-helpful, aiming-to-please AI agents in the hands of bad actors, both stateside and abroad (think: “vibe-hacking,” romance scams, and more). AI companies say they’re ahead of the risk with the voluntary safeguards they’ve implemented. But many others say this may be a case for an external gut-check.



Boston Dynamics CEO Robert Playter is stepping down after six years


Robert Playter, CEO of Boston Dynamics, announced on Tuesday that he is stepping down from his role effective immediately and leaving the company on February 27th, as previously reported by A3. Under Playter’s leadership, Boston Dynamics navigated its way through an acquisition from Softbank that brought it to Hyundai in 2021, and it launched a new all-electric version of its humanoid Atlas robot in 2024. Just a few days ago, the company posted another video of its research Atlas robots attempting tumbling passes and outdoor runs as more enterprise-ready editions start to roll out.

Boston Dynamics announced at CES last month that Atlas robots will begin working in Hyundai’s car plants starting in 2028, as the robotics field has become increasingly crowded by competitors like Tesla and Figure, as well as AI companies with “world model” tech built for robots.

Playter has been at Boston Dynamics for over 30 years and has served as CEO since 2020, replacing the company’s original CEO, Marc Raibert. Boston Dynamics CFO Amanda McMaster will serve as interim CEO while the company’s board of directors searches for Playter’s replacement.

“Boston Dynamics has been the ride of a lifetime. What this place has become has exceeded anything I could have ever imagined all those years ago in our funky lab in the basement of the MIT Media Lab,” Playter said in a letter to employees, which was shared with The Verge. He also highlighted the company’s successes with its Spot, Stretch, and Atlas robots.

“From the earliest days of hopping robots, to the world’s first quadrupeds, to spearheading the entire humanoid industry, Playter made his mark as a pioneer of innovation. He transformed Boston Dynamics from a small research and development lab into a successful business that now proudly calls itself the global leader in mobile robotics,” Nikolas Noel, VP of marketing and communications at Boston Dynamics, said in a statement to The Verge, adding, “He will be sorely missed, but we hope he enjoys some well-deserved time off. Thanks Rob.”



Microsoft ‘Important Mail’ email is a scam: How to spot it



Scam emails are getting better at looking official. This one claims to be an urgent warning from Microsoft about your email account. It looks serious. It feels time sensitive. And that is exactly the point. Lily reached out after something about the message did not sit right.

“I need help with an email that I’m unsure is valid. Hoping you can help me determine whether this is a valid or a scam. I have attached two screenshots below. Thank you in advance,” Lily wrote.

Here is the important takeaway up front. This email is not from Microsoft. It is a scam designed to rush you into clicking a dangerous link.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.



A closer look at the sender shows a red flag scammers hope you will miss, a free email address posing as a trusted brand. (Kurt “CyberGuy” Knutsson)

Why this Microsoft ‘Important Mail’ email is a scam

Once you slow down and read it closely, the red flags pile up quickly.

A generic greeting

It opens with “Dear User.” Microsoft uses your name. Scammers avoid it because they do not know who you are.

A hard deadline meant to scare you

The message claims your email access will stop on Feb. 5, 2026. Scammers rely on fear and urgency to short-circuit good judgment.


A completely wrong sender address

The email came from accountsettinghelp20@aol.com. Microsoft does not send security notices from AOL. Ever.

Pushy link language

“PROCEED HERE” is designed to trigger a fast click. Real Microsoft messages link to clearly labeled Microsoft.com pages.

Fake legal language

Lines like “© 2026 All rights reserved” are often copied and pasted by scammers to look official.

Attachments that should not be there

Microsoft account alerts do not include image attachments. That alone is a major warning sign.
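
The red flags above are mechanical enough that you can sketch them as a toy checker. This is illustrative only (real spam filters weigh hundreds of signals), and every name in it is made up for the example, but it shows how a free-mail sender posing as a brand, a generic greeting, and urgent language each stand out on their own:

```python
# Toy heuristic for the red flags described above. Illustrative only --
# not a substitute for a real spam filter or for manual caution.

FREE_MAIL = {"aol.com", "gmail.com", "yahoo.com"}  # the free services named in this article
URGENT = ("will stop", "immediately", "suspended", "proceed here")

def red_flags(sender: str, claimed_brand: str, body: str) -> list[str]:
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL and claimed_brand.lower() not in domain:
        flags.append(f"brand '{claimed_brand}' sent from free mail ({domain})")
    text = body.lower()
    if text.startswith("dear user"):
        flags.append("generic greeting")
    if any(phrase in text for phrase in URGENT):
        flags.append("urgency / pushy link language")
    return flags

# The exact message Lily received trips all three checks:
print(red_flags(
    "accountsettinghelp20@aol.com", "Microsoft",
    "Dear User, your email access will stop on Feb. 5, 2026. PROCEED HERE",
))
```

None of these checks proves an email is a scam by itself, which is why the advice below still starts with "do not click."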



The fake Microsoft email uses urgency and vague language to pressure you into clicking before you have time to think. (Kurt “CyberGuy” Knutsson)

What would have happened if you clicked

If you clicked the link, you would almost certainly land on a fake Microsoft login page. From there, attackers aim to steal:

  • Your email address
  • Your password
  • Access to other accounts tied to that email

Once they have your email, they can reset passwords, dig through old messages and launch more scams using your identity.


Scam emails often reach people on their phones, where small screens make it easier to miss warning signs and click fast. (Kurt “CyberGuy” Knutsson)

What to do if this email lands in your inbox

If an email like this shows up, slow down and follow these steps in order. Each one helps stop the scam cold.


1) Do not click or interact at all

Do not click links, buttons or images. Do not reply. Even opening attachments can trigger tracking or malware. The best way to safeguard yourself from malicious links is strong antivirus software installed on all your devices: it can block phishing pages, scan attachments and warn you about dangerous links before damage happens, and it can also alert you to phishing emails and ransomware scams. Make sure yours is active and up to date.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

2) Delete the message immediately

After you report it (step 7 below), delete it. There is no reason to keep it in your inbox or trash.

3) Check your account the safe way

If you want peace of mind, open a new browser window and go directly to the official Microsoft account website. Sign in normally. If there is a real issue, it will appear there.

4) Change your password if you clicked

If you clicked anything or entered information, change your Microsoft password right away. Use a strong, unique password you do not use anywhere else. A password manager can generate and store it securely for you. Then review recent sign-in activity for anything suspicious.


Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

5) Enable two-factor authentication

Turn on two-factor authentication (2FA) for your Microsoft account. This adds a second check, which can stop attackers even if they get your password.

6) Use a data removal service for long-term protection

Scammers often find targets through data broker sites. A data removal service helps reduce how much personal information is publicly available, which lowers your exposure to phishing in the first place.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.


Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

7) Report it as spam or phishing 

Use your email app’s built-in reporting tool. This helps train filters and protects other users from seeing the same scam.

Extra protection tips for real Microsoft notices

When Microsoft actually needs your attention, the signs look very different.

  • Alerts appear inside your Microsoft account dashboard
  • Messages do not demand immediate action through random email links
  • Notices never come from free email services like AOL, Gmail or Yahoo

That contrast makes scams easier to spot once you know what to look for.



Kurt’s key takeaways

Scammers are counting on you being busy, distracted or worried about losing access to your email. That is why messages like this lean so hard on urgency. Your email sits at the center of your digital life, so attackers know a shutdown threat gets attention fast. The good news is that slowing down for even a few seconds changes everything. Lily did exactly the right thing by stopping and asking first. That single habit can prevent identity theft, account takeovers and a long, frustrating cleanup. Remember this rule. Emails that threaten shutdowns and demand immediate action are almost never legitimate. When something feels urgent, that is your cue to pause, verify on your own and never let an email rush you into a mistake.

Have you seen a fake Microsoft warning like this recently, or did it pretend to come from another brand you trust? Let us know your thoughts by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.



ChatGPT’s cheapest options now show you ads


ChatGPT users may soon start seeing ads in their chats, as OpenAI announced on Monday that it’s officially beginning to test ads on its AI platform. They’ll appear as labeled “sponsored” links at the bottom of ChatGPT answers, but OpenAI says the ads “do not influence the answers ChatGPT gives you.”

Currently, ads will only show up for users on the free version of ChatGPT or the lowest-cost $8 per month Go plan. Users in the Plus, Pro, Business, Enterprise, and Education plans won’t see any ads, so anyone who wants to avoid them has to pay at least $20 per month for the Plus subscription. There is one loophole — OpenAI notes that users can “opt out of ads in the Free tier in exchange for fewer daily free messages.”

Users on the Go tier can’t opt out of seeing ads, but users on both the Free and Go plans can dismiss ads, share feedback on ads, turn off ad personalization, turn off the option for ads to be based on past chats, and delete their ad data. According to OpenAI, advertisers will only get data on “aggregated ad views and clicks,” not personalized data or content from users’ ChatGPT conversations.

Additionally, not all users and chats will be eligible for ads, including users under 18 and conversations on certain sensitive topics “like health, mental health or politics.” Even adult users on the chatbot’s Free and Go plans might not immediately start seeing ads, since the feature is still in testing.

