
Technology

AI agents are science fiction, not yet ready for primetime


This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on all things AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

It all started with J.A.R.V.I.S. Yes, that J.A.R.V.I.S. The one from the Marvel movies.

Well, maybe it didn’t start with Iron Man’s AI assistant, but the fictional system definitely helped the concept of an AI agent along. Whenever I’ve interviewed AI industry folks about agentic AI, they often point to J.A.R.V.I.S. as an example of the ideal AI tool in many ways — one that knows what you need done before you even ask, can analyze and find insights in large swaths of data, and can offer strategic advice or run point on certain aspects of your business. People sometimes disagree on the exact definition of an AI agent, but at its core, it’s a step beyond chatbots in that it’s a system that can perform multistep, complex tasks on your behalf without constantly needing back-and-forth communication with you. It essentially makes its own to-do list of subtasks it needs to complete in order to get to your preferred end goal. That fantasy is closer to being a reality in many ways, but when it comes to actual usefulness for the everyday user, there are a lot of things that don’t work — and maybe will never work.
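The "makes its own to-do list" idea is the core of most agent designs: a planner decomposes a goal into subtasks, then an executor works through them without further input from the user. Here's a toy sketch of that loop in Python — the planner and executor below are hypothetical stand-ins, not any vendor's actual implementation; real agents delegate both steps to a large language model and external tools:

```python
# Toy sketch of an agentic loop: plan -> execute each subtask -> report.
# The planner and executor here are hard-coded stand-ins; a real agent
# would call an LLM to decompose the goal and external tools (a browser,
# a calendar API, a code runner) to carry out each step.

def plan(goal: str) -> list[str]:
    """Decompose a goal into an ordered to-do list of subtasks."""
    return [f"research: {goal}", f"draft: {goal}", f"finalize: {goal}"]

def execute(subtask: str) -> str:
    """Carry out one subtask and return its result."""
    return f"done: {subtask}"

def run_agent(goal: str) -> list[str]:
    """Work through the self-generated to-do list with no back-and-forth."""
    return [execute(step) for step in plan(goal)]

for line in run_agent("book dinner for six"):
    print(line)
```

The point of the sketch is the shape, not the contents: the user states one goal, and the loop handles the intermediate steps on its own.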

The term “AI agent” has been around for a long time, but it especially started trending in the tech industry in 2023. That was the year of the concept of AI agents; the term was on everyone’s lips as people tried to suss out the idea and how to make it a reality, but you didn’t see many successful use cases. The next year, 2024, was the year of deployment — people were really putting the code out into the field and seeing what it could do. (The answer, at the time, was… not much. And filled with a bunch of error messages.)

I can pinpoint the hype around AI agents becoming widespread to one specific announcement: In February 2024, Klarna, a fintech company, said that after one month, its AI assistant (powered by OpenAI’s tech) had successfully done the work of 700 full-time customer service agents and automated two-thirds of the company’s customer service chats. For months, those statistics came up in almost every AI industry conversation I had.


The hype never died down, and in the following months, every Big Tech CEO seemed to harp on the term in every earnings call. Executives at Amazon, Meta, Google, Microsoft, and a whole host of other companies began to talk about their commitment to building useful and successful AI agents — and tried to put their money where their mouths were to make it happen.

The vision was that one day, an AI agent could do everything from book your travel to generate visuals for your business presentations. The ideal tool could even, say, find a good time and place to hang out with a bunch of your friends that works with all of your calendars, food preferences, and dietary restrictions — and then book the dinner reservation and create a calendar event for everyone.

Now let’s talk about the “AI coding” of it all: For years, AI coding has been carrying the agentic AI industry. If you asked anyone about real-life, successful, not-annoying use cases for AI agents happening right now and not conceptually in a not-too-distant future, they’d point to AI coding — and that was pretty much the only concrete thing they could point to. Many engineers use AI agents for coding, and they’re seen as objectively pretty good. Good enough, in fact, that at Microsoft and Google, up to 30 percent of the code is now being written by AI agents. And for startups like OpenAI and Anthropic, which burn through cash at high rates, one of their biggest revenue generators is AI coding tools for enterprise clients.

So until recently, AI coding has been the main real-life use case of AI agents, but obviously, that doesn’t cater to the everyday consumer. The vision, remember, was always a jack-of-all-trades sort of AI agent for the “everyman.” And we’re not quite there yet — but in 2025, we’ve gotten closer than we’ve ever been before.

Last October, Anthropic kicked things off by introducing “Computer Use,” a tool that allowed Claude to use a computer like a human might — browsing, searching, accessing different platforms, and completing complex tasks on a user’s behalf. The general consensus was that the tool was a step forward for the technology, but reviews said that in practice, it left a lot to be desired. Fast-forward to January 2025, and OpenAI released Operator, its version of the same thing, and billed it as a tool for filling out forms, ordering groceries, booking travel, and creating memes. Once again, in practice, many users agreed that the tool was buggy, slow, and not always efficient. But again, it was a significant step. The next month, OpenAI released Deep Research, an agentic AI tool that could compile long research reports on any topic for a user, and that pushed things forward, too. Some people said the research reports were more impressive in length than content, but others were seriously impressed. And then in July, OpenAI combined Deep Research and Operator into one AI agent product: ChatGPT Agent. Was it better than most consumer-facing agentic AI tools that came before? Absolutely. Was it still tough to make work successfully in practice? Absolutely.


So there’s a long way to go to reach that vision of an ideal AI agent, but at the same time, we’re technically closer than we’ve ever been before. That’s why tech companies are putting more and more money into agentic AI, by way of investing in additional compute, research and development, or talent. Google recently hired Windsurf’s CEO, cofounder, and some R&D team members, specifically to help Google push its AI agent projects forward. And companies like Anthropic and OpenAI are racing each other up the ladder, rung by rung, to introduce incremental features to put these agents in the hands of consumers. (Anthropic, for instance, just announced a Chrome extension for Claude that allows it to work in your browser.)

So really, what happens next is that we’ll see AI coding continue to improve (and, unfortunately, potentially replace the jobs of many entry-level software engineers). We’ll also see the consumer-facing agent products improve, likely slowly but surely. And we’ll see agents used increasingly for enterprise and government applications, especially since Anthropic, OpenAI, and xAI have all debuted government-specific AI platforms in recent months.

Overall, expect to see more false starts and stops, and more mergers and acquisitions, as the AI agent competition picks up (and the hype bubble continues to balloon). One question we’ll all have to ask ourselves as the months go on: What do we actually want a conceptual “AI agent” to be able to do for us? Do we want them to replace just the logistics, or also the more personal, human aspects of life (i.e., helping write a wedding toast or a note for a flower delivery)? And how good are they at helping with the logistics vs. the personal stuff? (Answer for that last one: not very good at the moment.)

  • Besides the astronomical environmental cost of AI — especially for large models, which are the ones powering AI agent efforts — there’s an elephant in the room. And that’s the idea that “smarter AI that can do anything for you” isn’t always good, especially when people want to use it to do… bad things. Things like creating chemical, biological, radiological, and nuclear (CBRN) weapons. Top AI companies say they’re increasingly worried about the risks of that. (Of course, they’re not worried enough to stop building.)
  • Let’s talk about the regulation of it all. A lot of people have fears about the implications of AI, but many aren’t fully aware of the potential dangers posed by uber-helpful, aiming-to-please AI agents in the hands of bad actors, both stateside and abroad (think: “vibe-hacking,” romance scams, and more). AI companies say they’re ahead of the risk with the voluntary safeguards they’ve implemented. But many others say this may be a case for an external gut-check.




Xiaomi 17 is a small(ish) phone with a big(ish) battery


Xiaomi has just given a global launch to two of its latest flagship phones, the Xiaomi 17 and 17 Ultra, along with a Leica-branded Leitzphone edition of the Ultra. There’s no sign, however, of the 17 Pro, which launched in China with an additional display mounted next to the rear cameras.

The 17 and 17 Ultra will apparently be available soon in the UK, Europe, and select other markets. The 17 — pitched as a rival to the likes of the iPhone 17 and Samsung Galaxy S26 — will cost £899 / €999 (about $1,200), while the larger and more capable Ultra starts from £1,299 / €1,499 ($1,750). The limited-edition Leitzphone will be substantially more expensive at £1,699 / €1,999 ($2,300), though it includes 16GB of RAM and 1TB of storage, along with a few extra accessories.

I like the simple, sleek aesthetic of the phone.

The 6.3-inch display isn’t tiny, but it does make the phone small by modern standards.

All three of the phone’s rear cameras are 50-megapixel.

The 17 is an extremely capable small-ish flagship, with a 6.3-inch OLED display, a Qualcomm Snapdragon 8 Elite Gen 5 chip, and a large 6,330mAh silicon-carbon battery (though sadly smaller than the 7,000mAh version launched in China). I won’t be writing a full review of the 17, but I did spend a week using it as my main phone, and found that the battery cruised past the full-day mark, though it wasn’t quite enough for two full days of my typical usage. That’s far better battery life than you’d find in similarly sized phones from Apple, Samsung, or Google.

The cameras impress too, with 50-megapixel sensors behind each of the four lenses, selfie included. Pound for pound, you won’t find many better camera systems in any phone this size.



I’ve been largely impressed by the Xiaomi 17’s cameras.

The Ultra, unsurprisingly, takes things to another level. It’s much larger, with a 6.9-inch display, and weighs a hefty 218g. Despite that, its 6,000mAh battery is actually smaller, though I found it delivered pretty similar longevity.


The 17 Ultra is larger in just about every respect, but strangely has a smaller battery.

The enormous camera is, as ever for Xiaomi’s Ultra phones, the highlight. There are 50-megapixel sensors for each of the main, ultrawide, and selfie cameras, with a large 1-inch-type sensor behind the primary lens. The periscope telephoto is even more impressive: 200-megapixel resolution, a large 1/1.4-inch sensor, and continuous optical zoom from 3.2x to 4.3x, the equivalent of 75-100mm. Xiaomi isn’t the first to pull off a true zoom phone — Sony’s Xperia 1 IV got there first in 2022 — but the telephoto camera here is far more capable than that phone’s, with natural bokeh and impressive performance even in low light.


This is the Leica-branded Leitzphone version of the 17 Ultra.

The camera capabilities are supported by Xiaomi’s ongoing photography partnership with Leica, but it’s the pair’s Leitzphone that really emphasizes that. Slightly redesigned from the 17 Ultra Leica Edition that was released in China last December, it includes Leica branding across the hardware and software, a range of Leica filters and shooting styles, and a rotatable rear camera ring that can be used to control the zoom. It’s the first Leica Leitzphone produced by Xiaomi — after a trio of Japan-only Sharp models — and comes with additional branded accessories, including a case with a lens cap and a microfiber cleaning cloth.

Xiaomi made plenty of other announcements alongside the 17 series phones at MWC this year, including a super-slim magnetic power bank, the Pad 8 and Pad 8 Pro tablets, and a smart tag that supports both Google’s and Apple’s item-tracking networks.


Photography by Dominic Preston / The Verge




Google dismantles 9M-device Android hijack network



Free apps are supposed to cost you nothing but storage space. But in this case, they may have cost millions of people control over their own internet connections.

Google says it has disrupted what it believes was the world’s largest residential proxy network, one that secretly hijacked around 9 million Android devices, along with computers and smart home gadgets. Most people had no idea their devices were being used since the apps worked normally, and nothing looked broken.

But behind the scenes, those devices were quietly routing traffic for strangers, including cybercriminals.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.



Google says it disrupted a massive residential proxy network that secretly hijacked about 9 million Android and smart devices. (AaronP/Bauer-Griffin/GC Images)

How your device became part of a proxy network

According to Google’s Threat Intelligence Group, the network was tied to a company known as IPIDEA. Instead of spreading through obvious malware, it relied on hidden software development kits, or SDKs, that were embedded inside more than 600 apps. These apps ranged from simple utilities to VPN tools and other free downloads. When you installed one, the app performed its advertised function. But it also enrolled your device into a residential proxy network.

That means your phone, computer or smart device could be used as a relay point for someone else’s internet traffic. That traffic might include scraping websites, launching automated login attempts or masking the identity of someone conducting shady online activity. From the outside, it looked like that activity came from your home IP address. You wouldn’t see it happening, and in many cases, you wouldn’t notice any major performance issues.

Google says in a single seven-day period earlier this year, more than 550 separate threat groups were observed using IP addresses linked to this infrastructure. That includes cybercrime operations and state-linked actors. Residential proxy networks are attractive because they make malicious traffic look like normal consumer activity. Instead of coming from a suspicious data center, it appears to come from someone’s living room.


What Google did to shut it down

Google says it took legal action in a U.S. federal court to seize domains used to control the infected devices and route proxy traffic. It also worked with companies like Cloudflare and other security firms to disrupt the network’s command-and-control systems. Google claims it also updated Play Protect, the built-in Android security system, so that certified devices would automatically detect and remove apps known to include the malicious SDKs.

However, Google also warned that many of these apps were distributed outside the official Play Store. That matters because Play Protect can only scan and block threats tied to apps installed through Google Play. Third-party app stores, unofficial downloads and uncertified Android devices carry far greater risk.

IPIDEA has claimed its service was meant for legitimate business use, such as web research and data collection. But Google’s research suggests the network was heavily abused by criminals. Even if some users knowingly installed bandwidth-sharing apps in exchange for rewards, many did not receive clear disclosure about how their devices were being used.

Google’s investigation also found significant overlap between different proxy brands and SDK names. What looked like separate services were often tied to the same infrastructure. That makes it harder for consumers to know which apps are safe and which are quietly monetizing their connection.



Hidden software inside more than 600 apps allegedly turned phones and computers into internet relays for cybercriminals. (David Paul Morris/Bloomberg via Getty Images)

7 ways you can protect yourself from Android proxy attacks

If millions of devices can be quietly turned into internet relay points, the big question is, how do you make sure yours isn’t one of them? These steps reduce the risk that your phone, TV box or smart device gets pulled into a proxy network without you realizing it.

1) Stick to official app stores

Only download apps from the Google Play Store or other trusted app marketplaces. Some apps hide small pieces of code that can secretly use your internet connection. These are often spread through third-party app stores or direct app files called “APKs,” which are Android app files installed manually instead of through the Play Store. When you sideload apps this way, you bypass Google’s built-in security checks. Sticking to official stores helps keep those hidden threats off your device.

2) Avoid “earn money by sharing bandwidth” apps

If an app promises rewards for sharing your unused internet bandwidth, that’s a major red flag. In many cases, that is exactly how residential proxy networks recruit devices. Even if it sounds legitimate, you are effectively renting out your IP address. That can expose you to abuse, blacklisting or deeper network vulnerabilities.

3) Review app permissions carefully

Before installing any app, check what permissions it requests. A simple wallpaper app should not need full network control or background execution privileges. After installation, go into your phone’s settings and audit which apps have constant internet access, background activity rights or special device permissions.


4) Install strong antivirus software

Today’s mobile security tools can detect suspicious app behavior, unusual internet activity and hidden background services. Strong antivirus software adds an extra layer of protection beyond what’s built into your device, especially if you’ve installed apps in the past that you’re unsure about. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

5) Keep your devices updated

Android security updates patch vulnerabilities that proxy operators may exploit. If you’re using an older phone, tablet or Android TV box that no longer receives updates, it may be time to upgrade. Unpatched devices are easier targets for hidden SDK abuse and botnet enrollment.

6) Use a strong password manager

If your device ever becomes part of a proxy network or is otherwise compromised, attackers often try to pivot into your accounts next. That’s why you should never reuse passwords. A password manager generates long, unique passwords for every account and stores them securely, so one breach does not unlock your email, banking or social media. Many password managers also include breach monitoring tools that alert you if your credentials appear in leaked databases, giving you a chance to act before real damage is done. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
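The "long, unique password" part of what a password manager does boils down to drawing characters from a cryptographically secure random source. Here's a minimal Python sketch of that one piece, using the standard library's `secrets` module (real password managers add encrypted storage, syncing, and breach monitoring on top of this):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing each character from the OS's secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A fresh, unrelated password per account means one breached site
# can't unlock your email, banking, or social media.
print(generate_password())
```

Note that `secrets` is designed for security-sensitive randomness, unlike the `random` module, whose output is predictable and should never be used for passwords.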

7) Remove apps you don’t fully trust

Go through your installed apps and delete or uninstall anything you don’t recognize or haven’t used in months. The fewer apps running on your device, the fewer opportunities there are for hidden SDKs to operate. If you suspect your device has been compromised, consider a full reset and reinstall only essential apps from trusted sources.



Threat groups and state-linked actors allegedly used compromised devices to mask online activity and automate attacks. (Photo Illustration by Serene Lee/SOPA Images/LightRocket via Getty Images)

Kurt’s key takeaway

Residential proxy networks operate in a gray area that sounds harmless on paper but can quickly become a shield for cybercrime. In this case, millions of everyday devices were quietly enrolled into a system that attackers used to hide their tracks. Google’s takedown is a major move, but the broader market for residential proxies is still growing. That means you need to be cautious about what you install and what permissions you grant. Free apps are rarely truly free. Sometimes, the product being sold is you and your internet connection.

Have you ever installed an app that promised rewards for sharing bandwidth, or used a free VPN without thinking twice about it? Let us know your thoughts by writing to us at Cyberguy.com.




Copyright 2026 CyberGuy.com. All rights reserved.



Defense secretary Pete Hegseth designates Anthropic a supply chain risk


This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO, @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.


As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

