
AI agents are science fiction not yet ready for primetime


This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on all things AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

It all started with J.A.R.V.I.S. Yes, that J.A.R.V.I.S. The one from the Marvel movies.

Well, maybe it didn’t start with Iron Man’s AI assistant, but the fictional system definitely helped the concept of an AI agent along. When I’ve interviewed AI industry folks about agentic AI, they often point to J.A.R.V.I.S. as the ideal AI tool — one that knows what you need done before you even ask, can analyze and find insights in large swaths of data, and can offer strategic advice or run point on parts of your business.

People disagree on the exact definition of an AI agent, but at its core, it’s a step beyond a chatbot: a system that can perform complex, multistep tasks on your behalf without constantly needing back-and-forth communication with you. It essentially makes its own to-do list of the subtasks it needs to complete to reach your end goal. That fantasy is closer to reality than ever, but when it comes to actual usefulness for the everyday user, there’s still a lot that doesn’t work — and maybe never will.
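To make that “to-do list of subtasks” idea concrete, here’s a minimal, purely illustrative sketch of the loop an agent runs: break a goal into subtasks, then work through them without checking in at every step. The planner and the “tool” below are canned stand-ins invented for this example — not any vendor’s actual product or API.

```python
# Toy agent loop: decompose a goal into subtasks, then execute each one.
# Everything here is a stand-in; real agents replace plan() with a language
# model call and execute() with tools like a browser, calendar, or code runner.

def plan(goal: str) -> list[str]:
    """Pretend planner: returns the agent's self-made to-do list."""
    return [
        f"research options for: {goal}",
        "compare options against the user's stated preferences",
        "draft a recommendation",
        "ask the user to confirm before taking any real-world action",
    ]

def execute(step: str) -> str:
    """Pretend tool call: a real agent would browse, query APIs, or run code here."""
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    results = []
    for step in plan(goal):            # the agent's own to-do list
        results.append(execute(step))  # each subtask handled without user input
    return results

if __name__ == "__main__":
    for line in run_agent("book dinner for six friends next Friday"):
        print(line)
```

In shipping products, most of the reliability problems described below live in those two steps: the plan is sometimes wrong, and the execution is often slow or brittle.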

The term “AI agent” has been around for a long time, but it started trending in the tech industry in earnest in 2023. That was the year of the concept: the term was on everyone’s lips as people tried to suss out the idea and how to make it a reality, but you didn’t see many successful use cases. The next year, 2024, was the year of deployment — people were actually putting the code out into the field and seeing what it could do. (The answer, at the time, was… not much, and what did ship was riddled with error messages.)

I can pinpoint the hype around AI agents becoming widespread to one specific announcement: In February 2024, Klarna, a fintech company, said that after one month, its AI assistant (powered by OpenAI’s tech) had successfully done the work of 700 full-time customer service agents and automated two-thirds of the company’s customer service chats. For months, those statistics came up in almost every AI industry conversation I had.


The hype never died down, and in the following months, every Big Tech CEO seemed to harp on the term in every earnings call. Executives at Amazon, Meta, Google, Microsoft, and a whole host of other companies began to talk about their commitment to building useful and successful AI agents — and tried to put their money where their mouths are to make it happen.

The vision was that one day, an AI agent could do everything from book your travel to generate visuals for your business presentations. The ideal tool could even, say, find a good time and place to hang out with a bunch of your friends that works with all of your calendars, food preferences, and dietary restrictions — and then book the dinner reservation and create a calendar event for everyone.

Now let’s talk about the “AI coding” of it all: for years, AI coding has been carrying the agentic AI industry. If you asked anyone about real-life, successful, not-annoying use cases for AI agents happening right now — not conceptually, in a not-too-distant future — they’d point to AI coding, and that was pretty much the only concrete thing they could point to. Many engineers use AI agents for coding, and the tools are widely considered pretty good. Good enough, in fact, that at Microsoft and Google, up to 30 percent of code is now written by AI. And for startups like OpenAI and Anthropic, which burn through cash at high rates, AI coding tools for enterprise clients are among their biggest revenue generators.

So until recently, AI coding has been the main real-life use case for AI agents — but obviously, that doesn’t do much for the everyday consumer. The vision, remember, was always a jack-of-all-trades sort of AI agent for the “everyman.” And we’re not quite there yet — but in 2025, we’ve gotten closer than ever before.

Last October, Anthropic kicked things off by introducing “Computer Use,” a capability that let Claude operate a computer the way a human might — browsing, searching, moving between platforms, and completing complex tasks on a user’s behalf. The general consensus was that it was a technical step forward, but reviewers found that in practice it left a lot to be desired. Fast-forward to January 2025, and OpenAI released Operator, its version of the same thing, billing it as a tool for filling out forms, ordering groceries, booking travel, and creating memes. Once again, many users found the tool buggy, slow, and not always efficient — but again, it was a significant step. The next month, OpenAI released Deep Research, an agentic tool that could compile long research reports on any topic, and that pushed things forward, too. Some people said the reports were more impressive in length than in content; others were seriously impressed. And then in July, OpenAI combined Deep Research and Operator into one AI agent product: ChatGPT Agent. Was it better than most consumer-facing agentic AI tools that came before? Absolutely. Was it still tough to make work successfully in practice? Absolutely.


So there’s a long way to go to reach that vision of an ideal AI agent, but at the same time, we’re technically closer than we’ve ever been before. That’s why tech companies are putting more and more money into agentic AI, by way of investing in additional compute, research and development, or talent. Google recently hired Windsurf’s CEO, cofounder, and some R&D team members, specifically to help Google push its AI agent projects forward. And companies like Anthropic and OpenAI are racing each other up the ladder, rung by rung, to introduce incremental features to put these agents in the hands of consumers. (Anthropic, for instance, just announced a Chrome extension for Claude that allows it to work in your browser.)

So really, what happens next is that we’ll see AI coding continue to improve (and, unfortunately, potentially replace the jobs of many entry-level software engineers). We’ll also see the consumer-facing agent products improve, likely slowly but surely. And we’ll see agents used increasingly for enterprise and government applications, especially since Anthropic, OpenAI, and xAI have all debuted government-specific AI platforms in recent months.

Overall, expect more false starts, stops and restarts, and mergers and acquisitions as the AI agent competition picks up (and the hype bubble continues to balloon). One question we’ll all have to ask ourselves as the months go on: what do we actually want an “AI agent” to be able to do for us? Do we want these tools to take over just the logistics, or also the more personal, human parts of life (e.g., helping write a wedding toast or a note for a flower delivery)? And how good are they at the logistics versus the personal stuff? (Answer to that last one: not very, at the moment.)

  • Besides the astronomical environmental cost of AI — especially for large models, which are the ones powering AI agent efforts — there’s an elephant in the room. And that’s the idea that “smarter AI that can do anything for you” isn’t always good, especially when people want to use it to do… bad things. Things like creating chemical, biological, radiological, and nuclear (CBRN) weapons. Top AI companies say they’re increasingly worried about the risks of that. (Of course, they’re not worried enough to stop building.)
  • Let’s talk about the regulation of it all. A lot of people have fears about the implications of AI, but many aren’t fully aware of the potential dangers posed by uber-helpful, aiming-to-please AI agents in the hands of bad actors, both stateside and abroad (think: “vibe-hacking,” romance scams, and more). AI companies say they’re ahead of the risk with the voluntary safeguards they’ve implemented. But many others say this may be a case for an external gut-check.


Amazon Health AI brings a doctor to your pocket



Most people have had this moment. You feel a strange symptom, open your phone and start searching online. Within minutes, you are deep in medical forums reading worst-case scenarios. By the end, you are either terrified or more confused than when you started.

Health care should feel clearer than that. Yet for many of us, it rarely does. Appointments take weeks. Medical records are hard to understand. You often have to repeat the same health history at every visit. Insurance rules feel like a maze.

According to the American Academy of Physician Associates, many Americans say navigating the healthcare system feels overwhelming and they wish doctors had more time to listen. Now, a new tool from Amazon hopes to change that experience. It is called Amazon Health AI.



Amazon Health AI lets you ask health questions, review records and connect with care directly through the Amazon app. (Kurt “CyberGuy” Knutsson)

What Amazon Health AI actually does

Amazon Health AI, available at amazon.com/health-ai, acts as a digital health assistant that can answer medical questions and help guide you through your care. The tool lives inside the Amazon app and website.

You start by typing a health question into a chat box. From there, the system can:

  • Explain lab results in plain language
  • Review symptoms and suggest next steps
  • Help schedule care with a provider
  • Assist with prescription renewals
  • Recommend relevant health products if asked

Health AI connects directly with clinicians from Amazon One Medical when professional care is needed. You can message a provider, start a video visit or schedule an in-person appointment. The goal is to make getting care simpler. Instead of spending time searching for appointments or jumping between different apps, you can move from a question to a provider more quickly. If symptoms suggest a possible emergency, the system may advise you to contact emergency services, such as calling 911.

Amazon is gradually rolling the Health AI tool out to U.S. customers, and availability varies by location.

CyberGuy reached out to Amazon for comment about the new service. Andrew Diamond, Ph.D., M.D., chief medical officer at Amazon One Medical, said the goal is to reduce some of the everyday frustrations people face when navigating healthcare.

“Nearly two-thirds of Americans feel overwhelmed by the healthcare system and wish their doctors had more time to understand their concerns,” Diamond said. “Health AI is designed to handle the logistical and informational work that creates friction in healthcare, so patients and providers can spend more time on what matters most: the human relationship at the heart of healing.”

How Amazon Health AI uses your medical history

Health AI becomes more useful when it understands your medical history.

With permission, the system can access information such as:

  • Past diagnoses
  • Medications
  • Lab results
  • Doctor’s notes

This data flows through a secure national network called the Health Information Exchange. Health AI can access records from hundreds of thousands of providers nationwide once permission is granted.

For example, imagine someone with asthma develops a cough during flu season. A generic search might treat that symptom like any other cough. Health AI can look at your history and ask follow-up questions based on your specific risk factors.

Health AI can provide general information about someone else’s health question, but personalized answers are limited to the medical history of the account holder.

That context helps the system provide more relevant guidance. Still, the assistant does not replace doctors. When the situation requires medical judgment, it connects you with a real clinician.


Health AI can help explain lab results, check symptoms and connect you with care through your phone. (Amazon)


How Amazon connects AI with real medical care

The service works closely with Amazon One Medical providers. Prescription renewals can also move through the system, with requests sent to a One Medical provider who reviews them before approval. You can fill prescriptions through Amazon Pharmacy or another pharmacy you prefer. The approach helps cut down on the steps people often face when trying to get care.

Special access for Prime members

Amazon is also adding a limited introductory benefit. Eligible members of Amazon Prime can receive up to five free message-based consultations with a One Medical provider.

Neil Lindsay, senior vice president of Amazon Health Services, said the goal is to make care easier to access through the tools people already use. “Eligible Prime member accounts get up to five free direct message care consultations with a One Medical provider for any of the 30 common conditions,” Lindsay said.

These visits cover common conditions, including:

  • Colds and flu
  • Allergies and acid reflux
  • Pink eye and UTIs
  • Hair loss and skin care

Outside the promotion, message or telehealth visits typically cost about $29. A full One Medical membership provides broader virtual care and costs less for Prime members than for non-members.

How Amazon says it protects health data

Health information raises serious privacy questions. Amazon says Health AI runs inside a HIPAA-compliant environment with strong encryption and strict access controls. According to the company, personal health data is not used to sell ads. Amazon also says protected health information from One Medical and Amazon Pharmacy is not used for advertising or sold to third parties.


The system also includes safety guardrails. If the AI cannot confidently answer a question, it directs you to a human provider. Behind the scenes, the technology runs on Amazon’s AI platform called Amazon Bedrock.

Amazon also emphasized that Health AI was designed alongside medical professionals rather than built purely as a technology product.

“This isn’t a chatbot with a healthcare skin,” said Prakash Bulusu, chief technology officer at Amazon Health Services. “It’s a system designed from the ground up to be personalized, trustworthy and useful.”

Bulusu said he personally tested the system with his own health data, and it surfaced lab work he had forgotten to complete after a physical exam.


You can ask Health AI about symptoms and receive guidance before deciding whether to seek medical care.  (Amazon)

Why Amazon believes AI belongs in healthcare

Millions of people already search Amazon for vitamins, blood pressure monitors and health products. The company believes AI can help guide those searches and connect them with medical advice. Amazon also partnered with major health systems, including the Cleveland Clinic and Rush University System for Health, to create smoother referrals between primary care and specialists. The idea is continuity. You should not feel like you are starting from scratch every time you see a new provider.

What this means for you

Tools like Health AI show how quickly artificial intelligence is moving into everyday health decisions. For patients, the potential benefits are clear. Faster answers. Simpler records. Easier access to doctors.

Yet it also raises big questions about privacy, data control and how much we rely on automated systems for health advice. AI can help people understand their health, but the human doctor still plays the most important role. The challenge will be finding the right balance.


Kurt’s key takeaways

Healthcare can be frustrating. Long waits, confusing records and disconnected systems often leave you feeling lost. Amazon believes AI can help guide you through that process. If the technology works as promised, it could help millions of us understand our health faster and reach care sooner. Still, any system that handles sensitive medical information must earn trust over time. That trust will depend on transparency, security and how responsibly companies use personal health data.

Would you feel comfortable letting an AI assistant review your medical history and guide your health decisions? Let us know by writing to us at Cyberguy.com.


Crimson Desert dev apologizes for use of AI art


Reviews of Crimson Desert have been mixed, but the bigger issue for the game has been the discovery of what appeared to be AI-generated assets in the final release. Now the developer has acknowledged that AI art was indeed used during the game’s creation, but says that it was intended to be replaced before release. In a statement on X, the company said it was conducting a “comprehensive audit” to identify and replace any AI-generated content.

The company apologized both for the AI-generated art’s inclusion in the final release and for not being more transparent about its use during development. “We should have clearly disclosed our use of AI,” it said.

The use of generative AI in gaming has become a hot-button issue over the last couple of years as it’s made its way into several high-profile titles. While some large studios have embraced it, many smaller developers have revolted against the trend, proudly proclaiming their games to be “AI free.”


YouTube job scam text: How to spot it fast



Most of us have received a random text that makes us pause for a second. Maybe it promises a prize. Maybe it claims to be from a delivery company. Lately, another type of message is spreading quickly: the remote job scam.

That is exactly what happened to Peter from New York. He wrote in after receiving a suspicious message about a high-paying YouTube job.

Here is what he sent:

“I received this text today, and I think it’s a scam. How can I tell for sure, and what do I do next?”


Below is the message Peter received. At first glance, it looks like a job opportunity. However, when you break it down line by line, several warning signs appear. Let’s walk through them.



A suspicious text message promises up to $10,000 a month for boosting YouTube video views. Offers like this are a common sign of a job scam.  (Kurt “CyberGuy” Knutsson)


Red flag 1: A random job offer from a stranger

The text comes from an unknown international phone number starting with +63, which is the country code for the Philippines. Legitimate companies rarely recruit through random text messages from unknown numbers. Real employers usually contact candidates through job platforms, email or professional networks like LinkedIn. When a job appears out of nowhere and promises high pay, it should immediately raise suspicion.

Red flag 2: The pay is wildly unrealistic

The message claims:

  • $200 to $600 per day
  • $10,000 or more per month

Those numbers are a major warning sign. Entry-level remote work, such as “boosting video views” or “YouTube optimization,” does not pay anywhere near that range. Scammers often use unusually high pay to trigger excitement and urgency. When money sounds too good to be true, it usually is.

Red flag 3: No experience required but huge income

The text says “no experience required, free paid training provided.” Scammers often combine high income with zero qualifications. That combination is designed to attract as many people as possible.

Real digital marketing jobs usually require:

  • SEO or marketing experience
  • Analytics knowledge
  • Platform expertise

A company offering $10K per month with no requirements is not realistic.


Scammers often claim no experience is required and that training is provided. The goal is to lure you in quickly before you start asking questions.  (Kurt “CyberGuy” Knutsson)

Red flag 4: The job description is vague

The text claims the job is to “increase video exposure and view count.”

That description is extremely vague. It does not explain:

  • What tools you would use
  • What company you would work for
  • How the work is measured

Scam job offers often stay vague so they can adapt the story later.

Red flag 5: Pressure to respond immediately

The message says: “5 urgent openings available, first come first served.” This is a classic scam tactic. Urgency pushes people to respond quickly before they have time to research the offer. Real companies rarely hire qualified candidates on a first-come basis through text messages.

Red flag 6: The strange reply instructions

The message tells recipients to reply “OK” and then send a numeric code. This step is often used to move the conversation to another messaging platform, such as Telegram or WhatsApp, where scammers continue the scheme. Once the conversation moves there, victims may be asked to:

  • Complete fake tasks
  • Send cryptocurrency
  • Pay deposits for “training”

These scams are often called task scams, where victims complete simple online tasks and may even receive small payments at first before scammers demand larger deposits for payouts that never come. They have exploded worldwide over the past few years.

Red flag 7: No company information

The message never names a real company. It mentions a “manager” named Goldie but provides:

  • No company website
  • No corporate email
  • No office address

Legitimate employers want applicants to know who they are. Scammers avoid details that can be verified.

How these YouTube job scams usually work

Many of these scams follow the same pattern. First, scammers promise easy money for simple tasks such as liking videos or boosting views. At the beginning, they may even send a small payment to build trust. Then things change. Victims are asked to deposit money to unlock larger payouts or complete “premium tasks.” Once payments are sent, the scammers disappear. The Federal Trade Commission says Americans have lost hundreds of millions of dollars to job scams in recent years, and text message recruitment scams are rising fast.

Google warns about growing job scams and how to verify recruiters

We reached out to Google, and a spokesperson provided the following statement to CyberGuy:

“Google is aware of these job scams happening across the industry and believes they’re growing around the world. We strongly encourage any candidate, or individual receiving them, to exercise caution and report it to the platform you received it on as a phishing attempt and/or spam. Our recruiting team focuses on contacting candidates in official capacities and are very clear about who we are, why we’re reaching out, and do so from legitimate emails or profiles on job sites. Jobseekers should verify anyone contacting them by email addresses, looking up the person online, such as on LinkedIn, and if something does seem suspicious, flag it to the outlet where it was received. Folks can also vet and report these scams to Google at support.google.com. Our Google careers page reflects all of our current job postings, so candidates should check offers against those. Generally speaking, Google also continues to offer a range of tools and insights that help people automatically spot and avoid scams like these whether they receive them via email, search results, text messages, etc.”


Messages that push you to reply immediately or move the conversation to apps like Telegram or WhatsApp are a major red flag.  (Kurt “CyberGuy” Knutsson)

Ways to stay safe from job text scams

If you receive a message like Peter’s, here are some smart steps to take.

1) Never respond to unknown job texts

Replying confirms your number is active. That can lead to more scam messages.

2) Do not click links or download attachments

Scam texts sometimes include links that lead to phishing pages designed to steal login credentials or financial information. Install strong antivirus software on your devices, which can help detect malicious links, block dangerous websites and warn you before you open something risky. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.


3) Reduce how easily scammers can find your information

Scammers often harvest phone numbers and personal details from data broker sites and public profiles. Using a data removal service to remove your information from these sites can make it harder for criminals to target you with job scams and other fraud. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

4) Research the company independently

Search for the company name online. Look for an official website, verified social media or job listings.

5) Avoid jobs that ask for money

Legitimate employers never require deposits for training, equipment or task access.

6) Block and report the number

You can report scam texts directly from your phone.

On iPhone:


Open the message, tap the phone number at the top of the screen, scroll down and select Block Contact. You can also tap Report Spam under the message; if the option appears, tap Delete and Report Spam, which sends the report to Apple and deletes the message.

On Samsung Galaxy phones:

Steps may vary slightly depending on your Samsung model and software version.

Open the Messages app and select the conversation. Tap the three-dot menu in the upper right corner, then tap Block and report spam, then confirm by tapping Yes. This blocks the number and helps Samsung identify and filter future scam messages.

7) Report it to the FTC

In the United States, you can report scams at reportfraud.ftc.gov. Reports help investigators track large scam networks.


So what should Peter do next?

The safest move is simple. Peter should not reply to the message. Instead, he should block the number and report it as spam. If he has already responded, he should stop communicating immediately and avoid clicking any links or sending money. If he shared personal information such as his phone number, email address or financial details, it may also be wise to monitor his accounts closely and consider signing up for an identity theft protection service. The good news is that spotting the red flags early can prevent a much bigger problem later. See my tips and best picks on Best Identity Theft Protection at Cyberguy.com.

Kurt’s key takeaways 

Scammers constantly adapt their tactics. Today, it might be a fake delivery notice. Tomorrow, it might be a high-paying remote job. The message Peter received hits many of the classic warning signs: unrealistic pay, vague job duties, urgent language and a request to reply quickly. When a stranger promises easy money through a random text message, pause for a moment. That short pause can save you a lot of trouble.

Now I am curious. If a text suddenly promised you $10,000 a month for simple online tasks, would you recognize the warning signs before replying? Let us know by writing to us at Cyberguy.com.
