Connect with us

Technology

OpenAI admits AI browsers face unsolvable prompt attacks

Published

on

OpenAI admits AI browsers face unsolvable prompt attacks

NEWYou can now listen to Fox News articles!

Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.

Sign up for my FREE CyberGuy Report 

Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

Advertisement

AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)

Why prompt injection isn’t going away

In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.

OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.

OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.

This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.

Advertisement

FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it.  (Kurt “CyberGuy” Knutsson)

The risk trade-off with AI browsers

OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing, and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.

Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.

The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.

Advertisement

Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.

THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)

7 steps you can take to reduce risk with AI browsers

You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.

1) Limit what the AI browser can access

Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.

Advertisement

2) Require confirmation for every sensitive action

Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.

3) Use a password manager for all accounts

A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.

Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2025 at Cyberguy.com

4) Run strong antivirus software on your device

Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.

Advertisement

The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

5) Avoid broad or open-ended instructions

Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.

6) Be careful with AI summaries and automated scans

When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.

7) Keep your browser, AI tools and operating system updated

Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.

Advertisement

CLICK HERE TO DOWNLOAD THE FOX NEWS APP

Kurt’s key takeaway

There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.

Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com

Sign up for my FREE CyberGuy Report 

Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

Advertisement

Copyright 2025 CyberGuy.com.  All rights reserved.

Technology

Microsoft starts removing Copilot buttons from Windows 11 apps

Published

on

Microsoft starts removing Copilot buttons from Windows 11 apps

Microsoft is starting to remove “unnecessary” Copilot buttons from its Windows 11 apps. In the latest version of the Notepad app for Windows Insiders, Microsoft has removed the Copilot button in favor of a “writing tools” menu. The Copilot button in the Snipping Tool app also no longer appears when you select an area to capture.

The change is part of “reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad,” that Microsoft promised to complete as part of its broader plan to fix Windows 11. While Copilot buttons are being removed, it looks like the underlying AI features are here to stay, though.

The Copilot button has been removed from Notepad, but the writing tools replacement still uses AI-powered features and looks like the identical menu of options that existed before. I still think these features are largely unnecessary in what’s supposed to be a lightweight text app, but removing the superfluous Copilot branding is a good first step.

Continue Reading

Technology

AI chatbots refilling psych meds sparks debate

Published

on

AI chatbots refilling psych meds sparks debate

NEWYou can now listen to Fox News articles!

If you have ever waited weeks just to renew a mental health prescription, you already know how frustrating the system can feel. Now imagine handling that refill through a chatbot instead of a doctor.

That kind of thing is already starting to happen. In Utah, a new pilot program is allowing an artificial intelligence system from Legion Health to renew certain psychiatric medications without direct approval from a physician each time. State officials say this could speed things up and reduce costs.

Many psychiatrists are not convinced. They are asking whether this actually solves the problem it claims to fix.

Sign up for my FREE CyberGuy Report

Advertisement
  • Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
  • For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
  • Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.

AMAZON HEALTH AI BRINGS A DOCTOR TO YOUR POCKET
 

Utah launches AI chatbot to renew select psychiatric prescriptions, raising questions about safety and oversight. (pocketlight/Getty Images)

How the AI prescription system works

Before this starts sounding like a robot psychiatrist, the program stays tightly limited. The AI only renews a short list of lower-risk medications that a doctor has already prescribed. These include commonly used antidepressants like Prozac, Zoloft and Wellbutrin. 

To qualify, patients must meet strict requirements. You need to be stable on your current medication. Recent dosage changes or a psychiatric hospitalization will disqualify you. You also need to check in with a healthcare provider after a set number of refills or within a certain time frame.

During the process, the chatbot asks about symptoms, side effects and warning signs such as suicidal thoughts. If anything raises concern, it sends the case to a real doctor before approving a refill. According to an agreement filed with Utah’s Office of Artificial Intelligence Policy, the pilot includes strict safeguards, including human review thresholds and automatic escalation for higher-risk cases. The system cannot prescribe new medications or manage drugs that require close monitoring. As a result, it leaves out many complex conditions from the pilot.

Why some experts are pushing back

Even with those guardrails, many psychiatrists are uneasy. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, has questioned whether AI systems like this actually solve the access problem they are designed to address. 

Advertisement

He has suggested that the benefits of an AI-based refill system may be overstated, especially since patients must already be stable and under care to qualify. Kious has also raised concerns about how much these systems rely on self-reported answers. Patients may not recognize side effects, may answer inaccurately, or may adjust their responses to get the outcome they want. 

He has further questioned whether current AI tools can safely handle even routine parts of psychiatric care, noting that treatment decisions often depend on factors that go beyond simple screening questions. He has also pointed to a lack of transparency in how these systems operate, which can make it harder for doctors and patients to fully trust them. 

HEALTHCARE DATA BREACH HITS SYSTEM STORING PATIENT RECORDS
 

A new pilot program allows AI to handle some mental health medication refills without direct doctor approval. (Sezeryadigar/Getty Images)

The promise behind the technology

Supporters of the program are focused on access. A lot of people in Utah still struggle to get mental health care. Wait times can stretch for weeks. In some areas, there simply are not enough providers available. The idea is that AI can take care of routine refill requests so doctors have more time to focus on patients with more complex needs. That could help take some pressure off the system. Legion Health is also leaning into convenience. The service is expected to cost about $19 a month and is designed to make refills quicker and easier for patients who qualify. From a big-picture view, that could help. From a patient’s point of view, the tradeoff may feel a little more complicated. We reached out to Legion Health for comment, but did not hear back before our deadline.

Advertisement

What this means to you

If you rely on mental health medication, this kind of system could change how you manage your care. You may be able to get refills more quickly if your condition is stable and your treatment plan is not changing. At the same time, this does not replace your doctor. It does not handle new diagnoses or complex decisions. It also adds another layer between you and your care. Instead of a conversation, you are interacting with a system that depends on how you answer a series of questions. Mental health treatment often depends on small details. Changes in mood, sleep or behavior can matter more than a simple yes or no response. That is where some experts believe human care still has a clear advantage.

The bigger question about AI in healthcare

This pilot is only one step in a much larger shift. Utah is already experimenting with AI in other areas of healthcare. Companies like Legion are signaling plans to expand beyond a single state. What starts with simple refills could eventually move into more complex decisions. That is where the conversation becomes more urgent. Is this a practical way to improve access to care, or does it risk reducing something deeply personal into a transaction driven by software?

HOW ARTIFICIAL INTELLIGENCE IS TRANSFORMING HEALTHCARE
 

Psychiatrists question whether AI prescription refills address access issues or create new risks for patients. (SDI Productions/Getty Images)

Take my quiz: How safe is your online security?

Advertisement

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

There is no question that access to mental health care needs improvement. Long wait times and limited availability are real problems that affect millions of people. AI may help in specific situations, especially when the task is routine and the patient is stable. Still, convenience should not be confused with quality. For now, this system is narrow in scope and closely monitored. That makes it easier to test. It also highlights how early we are in this transition. The technology will continue to evolve. The real question is whether the safeguards, oversight and transparency will evolve at the same pace.

Would you feel comfortable letting a chatbot handle part of your mental health care, or is that a line you do not want technology to cross? Let us know by writing to us at Cyberguy.com.

CLICK HERE TO DOWNLOAD THE FOX NEWS APP

Sign up for my FREE CyberGuy Report

Advertisement
  • Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
  • For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
  • Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.

Copyright 2026 CyberGuy.com. All rights reserved.

Continue Reading

Technology

ChatGPT has a new $100 per month Pro subscription

Published

on

ChatGPT has a new 0 per month Pro subscription

OpenAI has announced a new version of its ChatGPT Pro subscription that costs $100 per month. The new Pro tier offers “5x more” usage of its Codex coding tool than the $20 per month Plus subscription and “is best for longer, high-effort Codex sessions,” OpenAI says.

The company is introducing the new tier as it tries to win over users from Anthropic and its popular Claude Code tool. ChatGPT’s $100 per month option will directly compete with Anthropic’s “Max” tier for Claude, which costs the same price. It also offers a middle ground between the $20 per month Plus tier and the $200 version of the Pro tier.

(Yes, there are now two tiers of “Pro”; while the new tier “still offers access to all Pro features,” OpenAI says that the more expensive one has even higher usage limits.)

According to OpenAI, ChatGPT Plus will “will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use.” OpenAI also offers an $8 per month Go tier and a free tier.

Continue Reading

Trending