This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on all things AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.
Technology
AI agents are science fiction, not yet ready for primetime
It all started with J.A.R.V.I.S. Yes, that J.A.R.V.I.S. The one from the Marvel movies.
Well, maybe it didn’t start with Iron Man’s AI assistant, but the fictional system definitely helped the concept of an AI agent along. Whenever I’ve interviewed AI industry folks about agentic AI, they often point to J.A.R.V.I.S. as an example of the ideal AI tool: one that knows what you need done before you even ask, can analyze and find insights in large swaths of data, and can offer strategic advice or run point on certain aspects of your business.

People sometimes disagree on the exact definition of an AI agent, but at its core, it’s a step beyond a chatbot: a system that can perform multistep, complex tasks on your behalf without constantly needing back-and-forth communication with you. It essentially makes its own to-do list of the subtasks it needs to complete to reach your preferred end goal. That fantasy is closer to reality in many ways, but when it comes to actual usefulness for the everyday user, there’s a lot that doesn’t work, and some of it maybe never will.
The term “AI agent” has been around for a long time, but it really started trending in the tech industry in 2023. That was the year of the concept of AI agents: the term was on everyone’s lips as people tried to suss out the idea and how to make it a reality, but you didn’t see many successful use cases. The next year, 2024, was the year of deployment — people were really putting the code out into the field and seeing what it could do. (The answer, at the time, was… not much, and what it did do was riddled with error messages.)
I can pinpoint the hype around AI agents becoming widespread to one specific announcement: In February 2024, Klarna, a fintech company, said that after one month, its AI assistant (powered by OpenAI’s tech) had successfully done the work of 700 full-time customer service agents and automated two-thirds of the company’s customer service chats. For months, those statistics came up in almost every AI industry conversation I had.
The hype never died down, and in the following months, every Big Tech CEO seemed to harp on the term in every earnings call. Executives at Amazon, Meta, Google, Microsoft, and a whole host of other companies began to talk about their commitment to building useful and successful AI agents — and tried to put their money where their mouths were to make it happen.
The vision was that one day, an AI agent could do everything from book your travel to generate visuals for your business presentations. The ideal tool could even, say, find a good time and place to hang out with a bunch of your friends that works with all of your calendars, food preferences, and dietary restrictions — and then book the dinner reservation and create a calendar event for everyone.
Now let’s talk about the “AI coding” of it all: for years, AI coding has been carrying the agentic AI industry. If you asked anyone about real-life, successful, not-annoying use cases for AI agents happening right now, not conceptually in a not-too-distant future, they’d point to AI coding — and for a long time, that was pretty much the only concrete thing they could point to. Many engineers use AI agents for coding, and the tools are widely seen as pretty good. Good enough, in fact, that at Microsoft and Google, up to 30 percent of code is now being written by AI. And for startups like OpenAI and Anthropic, which burn through cash at high rates, AI coding tools for enterprise clients are one of their biggest revenue generators.
So until recently, AI coding has been the main real-life use case of AI agents, but obviously, that’s not catering to the everyday consumer. The vision, remember, was always a jack-of-all-trades sort of AI agent for the “everyman.” And we’re not quite there yet — but in 2025, we’ve gotten closer than ever before.
Last October, Anthropic kicked things off by introducing “Computer Use,” a tool that allowed Claude to use a computer like a human might: browsing, searching, accessing different platforms, and completing complex tasks on a user’s behalf. The general consensus was that the tool was a step forward for the technology, but reviewers said that in practice, it left a lot to be desired. Fast-forward to January 2025, and OpenAI released Operator, its version of the same thing, billing it as a tool for filling out forms, ordering groceries, booking travel, and creating memes. Once again, many users agreed that in practice the tool was buggy, slow, and not always efficient. But again, it was a significant step.

The next month, OpenAI released Deep Research, an agentic AI tool that could compile long research reports on any topic for a user, and that moved things forward, too. Some people said the research reports were more impressive in length than in content, but others were seriously impressed. And then in July, OpenAI combined Deep Research and Operator into one AI agent product: ChatGPT Agent. Was it better than most consumer-facing agentic AI tools that came before? Absolutely. Was it still tough to make work successfully in practice? Absolutely.
So there’s a long way to go to reach that vision of an ideal AI agent, but at the same time, we’re technically closer than we’ve ever been before. That’s why tech companies are putting more and more money into agentic AI, by way of investing in additional compute, research and development, or talent. Google recently hired Windsurf’s CEO, cofounder, and some R&D team members, specifically to help Google push its AI agent projects forward. And companies like Anthropic and OpenAI are racing each other up the ladder, rung by rung, to introduce incremental features to put these agents in the hands of consumers. (Anthropic, for instance, just announced a Chrome extension for Claude that allows it to work in your browser.)
So really, what happens next is that we’ll see AI coding continue to improve (and, unfortunately, potentially replace the jobs of many entry-level software engineers). We’ll also see the consumer-facing agent products improve, likely slowly but surely. And we’ll see agents used increasingly for enterprise and government applications, especially since Anthropic, OpenAI, and xAI have all debuted government-specific AI platforms in recent months.
Overall, expect more false starts, stops and restarts, and mergers and acquisitions as the AI agent competition picks up (and the hype bubble continues to balloon). One question we’ll all have to ask ourselves as the months go on: what do we actually want a conceptual “AI agent” to be able to do for us? Do we want agents to replace just the logistics, or also the more personal, human aspects of life (say, helping write a wedding toast or a note for a flower delivery)? And how good are they at helping with the logistics vs. the personal stuff? (Answer to that last one: not very good at the moment.)
- Besides the astronomical environmental cost of AI — especially for large models, which are the ones powering AI agent efforts — there’s an elephant in the room. And that’s the idea that “smarter AI that can do anything for you” isn’t always good, especially when people want to use it to do… bad things. Things like creating chemical, biological, radiological, and nuclear (CBRN) weapons. Top AI companies say they’re increasingly worried about the risks of that. (Of course, they’re not worried enough to stop building.)
- Let’s talk about the regulation of it all. A lot of people have fears about the implications of AI, but many aren’t fully aware of the potential dangers posed by uber-helpful, aiming-to-please AI agents in the hands of bad actors, both stateside and abroad (think: “vibe-hacking,” romance scams, and more). AI companies say they’re ahead of the risk with the voluntary safeguards they’ve implemented. But many others say this may be a case for an external gut-check.
Technology
Amazon’s smart shopping cart for Whole Foods gets bigger, lighter, and adds tap-to-pay
Amazon is launching a revamped version of its smart shopping cart, which it plans to bring to dozens of Whole Foods locations by the end of this year, according to an announcement on Wednesday. The new Dash Cart features a “more responsive” item scanner that’s now located next to the built-in display, along with a new NFC reader that lets you tap to pay with your credit card or phone.
Amazon’s previous Dash Cart design put scanners beneath and in front of the handle, potentially making them harder to spot. It also only let you pay with the credit card attached to your Amazon account.
With the upgraded Dash Cart, you’ll find a new scale alongside the cart’s handle, which Amazon says “works in tandem with on-cart cameras, weight sensors, and deep learning models to ensure accurate pricing for every item.” The upgraded Dash Cart eliminates the large sensors facing inside the cart as well, offering a 40 percent larger capacity and a 25 percent lighter weight.
The Dash Cart shows an interactive map of the store on its display, similar to Instacart’s smart Caper Cart. You can sync your shopping list created with Alexa, too, and see how much you’re spending as you add more items to your cart. The cart uses built-in sensors and computer vision to detect when you’ve removed an item, allowing it to automatically update your total. When you’re done shopping, you can skip the checkout line and leave the store in a designated Dash Cart lane.
Amazon is launching its new Dash Cart as the company shakes up its grocery business, which has tied Whole Foods more closely to the Amazon brand. The company has already brought its new Dash Cart to three Whole Foods stores in McKinney, Texas; Reston, Virginia; and Westford, Massachusetts, along with two Amazon Fresh stores.
Technology
Fake error popups are spreading malware fast
A dangerous cybercrime tool has surfaced in underground forums, making it far easier for attackers to spread malware.
Instead of relying on hidden downloads, this tool pushes fake error messages that pressure you into fixing problems that never existed. Security researchers say this method is spreading quickly because it feels legitimate. The page looks broken. The warning feels urgent. The fix sounds simple.
That combination is proving alarmingly effective for cybercriminals.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
How fake error malware attacks actually work
These attacks begin with a compromised website. When a visitor lands on the page, something looks wrong right away. Text appears broken. Fonts look scrambled. Visual elements seem corrupted. A pop-up then appears claiming the issue can be fixed with a browser update or a missing system font. A button offers to repair the problem instantly.
Clicking that button copies a command to the clipboard and displays instructions to paste it into PowerShell or a system terminal. That single step launches the infection.
Why this new tool changes the threat landscape
The tool behind these attacks is called ErrTraffic. It automates the entire process and removes the technical barriers that once limited cybercrime operations. For about $800, attackers get a full package with a control panel and scripted payload delivery. Analysts at the Hudson Rock Threat Intelligence Team identified the tool after tracking its promotion on Russian-language forums in early December 2025.
ErrTraffic works through a simple JavaScript injection. A single line of code connects a hacked site to the attacker’s dashboard. From there, everything adapts automatically. The script detects the operating system and browser. It then displays a customized fake error message in the correct language. The attack works across Windows, Android, macOS and Linux.
Why security software struggles to stop it
Traditional malware defenses look for suspicious downloads or unauthorized installations. ErrTraffic avoids both. Browsers see normal text copying. Security tools see a legitimate system utility being opened manually. Nothing appears out of place. That design allows the attack to slip through protections that would normally stop malware in its tracks.
The success rate is deeply concerning
Data pulled from active ErrTraffic campaigns shows conversion rates approaching 60%. That means more than half of the visitors who see the fake error message follow the instructions and install malware. Once active, the tool can deliver infostealers like Lumma or Vidar on Windows devices. Android targets often receive banking trojans instead. The control panel even includes geographic filtering, with built-in blocks for Russia and neighboring regions to avoid drawing attention from local authorities.
What happens after infection?
Once malware is installed, credentials and session data are stolen. Those compromised logins are then used to breach additional websites. Each newly hacked site becomes another delivery vehicle for the same attack. That cycle allows the campaign to grow without direct involvement from the original operator.
Ways to stay safe from fake error malware
A few smart habits can significantly reduce risk when facing fake error pop-ups and browser-based traps.
1) Never run commands suggested by a website
Legitimate websites never ask you to copy and paste commands into PowerShell or a system terminal. Fake error malware relies on convincing messages that pressure you into doing exactly that. If a page instructs you to run code to fix a problem, close it immediately.
2) Close pages that claim your system is corrupted
Fake error campaigns often use broken text, scrambled fonts or warnings about missing files to grab attention. These visuals create urgency and trigger fear. In reality, a real system problem never announces itself through a random website, so close the page right away.
3) Install updates only through official system settings
Real browser and operating system updates come from built-in update tools, not pop-ups on websites. If an update is needed, your device will notify you directly through system settings or trusted app stores.
4) Install strong antivirus software on every device
Strong antivirus software can help block malicious scripts, detect infostealers and stop suspicious behavior before damage spreads. This is especially important since fake error malware targets Windows, Android, macOS and Linux systems.
The best way to safeguard yourself from malicious links that install malware and can potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
5) Use a data removal service to reduce exposure
Stolen credentials fuel the spread of fake error malware. Removing personal information from data broker sites can reduce the impact if login details are compromised and limit how far an attack can spread.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Treat font and browser update pop-ups with suspicion
Claims about missing fonts or outdated browsers are a hallmark of these attacks. Modern systems manage fonts automatically, and browsers update themselves. A webpage has no reason to request manual fixes.
If a real update is needed, the operating system will request it directly. A random webpage never should.
Kurt’s key takeaways
Fake error malware works because it plays on a very human reaction. When something on a screen suddenly looks broken, most people want to fix it fast and move on. That split-second decision is exactly what attackers are counting on. Tools like ErrTraffic show how polished these scams have become. The messages look professional. The instructions feel routine. Nothing about the moment screams danger. But behind the scenes, one click can quietly hand over passwords, banking access and personal data. The good news is that slowing down makes a real difference. Closing a suspicious page and trusting built-in system updates can stop these attacks cold. When it comes to pop-ups claiming your device is broken, walking away is often the smartest fix.
Have you ever seen a pop-up or error message that made you stop and wonder if it was real? Tell us what it looked like and how you handled it by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Intel is planning a custom Panther Lake CPU for handheld PCs
Intel announced yesterday that it’s developing an entire “handheld gaming platform” powered by its new Panther Lake chips, joining an increasingly competitive field. Qualcomm is hinting at potential Windows gaming handhelds showing up at the Game Developers Conference in March, and AMD’s new Strix Halo chips could lead to more powerful handhelds.
According to IGN and TechCrunch, sources say Intel is going to compete by developing a custom Intel Core G3 “variant or variants” just for handhelds that could outperform the Arc B390 GPU on the chips it just announced. IGN reports that by using the new 18A process, Intel can cut different die slices, and “spec the chips to offer better performance on the GPU where you want it.”
As for concrete details about the gaming platform, we’re going to have to wait. According to Intel’s Dan Rogers yesterday, the company will have “more news to share on that from our hardware and software partners later this year.” The Intel-based MSI Claw saw a marked improvement when it jumped to Lunar Lake, and hopefully the new platform keeps up that positive trend.