Technology
OpenAI admits AI browsers face unsolvable prompt attacks
NEWYou can now listen to Fox News articles!
Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY
AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)
Why prompt injection isn’t going away
In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.
OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.
OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.
This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.
FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE
Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it. (Kurt “CyberGuy” Knutsson)
The risk trade-off with AI browsers
OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing, and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.
Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.
The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.
Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS
As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)
7 steps you can take to reduce risk with AI browsers
You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.
1) Limit what the AI browser can access
Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.
2) Require confirmation for every sensitive action
Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.
3) Use a password manager for all accounts
A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.
Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com
4) Run strong antivirus software on your device
Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
5) Avoid broad or open-ended instructions
Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.
6) Be careful with AI summaries and automated scans
When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.
7) Keep your browser, AI tools and operating system updated
Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaway
There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.
Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Valve’s huge SteamOS 3.8 update adds long-awaited features — and supports Steam Machine
Not only is it the first release to support the upcoming Steam Machine living room gaming PC, it comes with long-awaited features for Valve’s handhelds and more support for other companies’ handhelds than we’ve seen to date — including Microsoft and Asus’ Xbox Ally series, the Lenovo Legion Go 2, the OneXPlayer X1, and additional support for MSI, GPD, Anbernic, OrangePi, and Zotac.
The one that excites me most: Valve is adding genuine hibernation and “memory power down” modes to the Steam Deck — though just the LCD model to start — which should help extend battery life when you hit the power button or leave them idle. Some Windows machines currently last longer than the Steam Deck when asleep, because they self-hibernate to save power, while the Steam Deck has an instant-on sleep mode.
Plus, Valve has finally added a setting in its gaming mode to let you use your Bluetooth headset microphones — something I’ve been asking for since the beginning. (Valve did add it to the Linux desktop mode last year.) And the Steam Deck LCD is finally getting Bluetooth Wake re-enabled, so you can turn on your TV-connected Deck with a wireless controller from your couch.
The update comes with all sorts of improvements for the Linux desktop modes that sound like they’ll come in handy on a Steam Machine plugged into a TV or monitor, too, including desktop HDR, VRR display support, per-display scaling, “improved windowing behavior for games running in Proton,” and an upgrade to KDE Plasma 6.4.3 among other things.
And for a Steam Machine or Steam handheld plugged into a home entertainment system, they can now detect how many audio channels you have over HDMI to enable surround sound. (I believe surround sound was already a thing, so perhaps this is just a different and better automatic implementation.)
There’s also a new Arch system base and an updated graphics driver.
Perhaps most surprisingly, the “Non-Deck” section of the changelog is huge. Valve says long-pressing your power button should work “across a wide variety of devices” to power off, restart, or switch to the desktop mode. You should be able to change your processor’s power modes on the Xbox Ally now, and night mode and screen color settings should work on AMD Z2 Extreme handhelds in general.
There’s also “Greatly improved video memory management with discrete GPU platforms,” you can limit how far the battery charges in any of the Lenovo Legion Go handhelds (in desktop mode), and it should fix “washed out colors for Zotac and OneXPlayer handhelds with OLED.”
There’s a lot in this update, and it’s possible I missed a feature you care about, so check out the whole changelog here and below.
Technology
Fox News AI Newsletter: Wall-climbing robots swarm US Navy warships
Under the five-year contract, Gecko will begin work on 18 ships in the U.S. Pacific Fleet, with the initial award valued at up to $54 million. The contract vehicle is structured to allow other military services to access the technology as well. (Gecko Robotics )
NEWYou can now listen to Fox News articles!
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
IN TODAY’S NEWSLETTER:
– WATCH: Wall-climbing robot swarms crawl US Navy warships as China’s fleet surges
– OPINION: AI comes with a hefty charge, and you are the one who gets stuck with the bill
– Dell workforce shrinks 10% for third consecutive year
Swarms of wall-climbing robots will soon be crawling across U.S. Navy warships in a $71 million effort to slash repair delays and boost fleet readiness as China continues expanding its naval power. (Gecko Robotics )
TECH AT SEA: WATCH: wall-climbing robot swarms crawl US Navy warships as China’s fleet surges – Fox News Digital reports on a new development in naval technology, featuring wall-climbing robot swarms that are crawling on U.S. Navy warships. This advancement comes at a critical time in defense politics as China’s naval fleet continues to surge in size and capability.
WALLET SHOCK: OPINION: AI comes with a hefty charge, and you are the one who gets stuck with the bill – In this opinion piece, the author discusses the economic implications of the growing artificial intelligence industry. The article argues that the hefty costs associated with AI development and its massive energy infrastructure will ultimately be passed down, leaving everyday consumers to foot the bill.
Dell Technologies headquarters in Round Rock, Texas, US, on Sunday, Nov. 26, 2023. (Sergio Flores/Bloomberg via Getty Images)
COST CRUNCH: Dell workforce shrinks 10% for third consecutive year – Fox Business reports that Dell’s workforce has shrunk by ten percent. This marks the third consecutive year of workforce reductions for the major technology company amid shifting economic conditions and corporate restructuring.
AIMING HIGH: FULL AUTONOMY: AI pilot technology advances towards military capability – Merlin CEO Matt George details how the company is using artificial intelligence to enable military and commercial aircraft to operate fully autonomously on Fox Business’ ‘The Claman Countdown.’
Single family homes in a residential neighborhood in San Marcos, Texas, US, on Tuesday, March 12, 2024. (Photographer: Jordan Vonderhaar/Bloomberg via Getty Images)
SHOULD I BUY?: Homebuyers, sellers turning to AI chatbots for advice – Prairie Operating Co.’s Lou Basenese and real estate broker Kirsten Jordan discuss how artificial intelligence is impacting homebuyers and sellers on ‘Fox Business In Depth.’
DISRUPTION IS HERE: Charles Payne: AI disruption is here – Fox Business host Charles Payne discusses the economic impact of the rise in artificial intelligence on ‘Making Money.’
BUILDING HER BUSINESS: How Angie Hicks turned Angi into a home services giant and AI player – Angi co-founder Angie Hicks discusses entrepreneurship, company growth and how she built out her business on ‘Mornings with Maria.’
FOLLOW FOX NEWS ON SOCIAL MEDIA
Facebook
Instagram
YouTube
X
LinkedIn
SIGN UP FOR OUR OTHER NEWSLETTERS
Fox News First
Fox News Opinion
Fox News Lifestyle
Fox News Health
DOWNLOAD OUR APPS
Fox News
Fox Business
Fox Weather
Fox Sports
Tubi
WATCH FOX NEWS ONLINE
Fox News Go
STREAM FOX NATION
Fox Nation
Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.
Technology
A rogue AI led to a serious security incident at Meta
For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that “no user data was mishandled” during the incident.
A Meta engineer was using an internal AI agent, which Clayton described as “similar in nature to OpenClaw within a secure development environment,” to analyze a technical question another employee posted on an internal company forum. But the agent also independently publicly replied to the question after analyzing it, without getting approval first. The reply was only meant to be shown to the employee who requested it, not posted publicly.
An employee then acted on the AI’s advice, which “provided inaccurate information” that led to a “SEV1” level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.
According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information — and it’s not clear whether the employee who originally prompted the answer planned to post it publicly.
“The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee’s own reply on that thread,” Clayton commented to The Verge. “The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided.”
Last month, an AI agent from open source platform OpenClaw went more directly rogue at Meta when an employee asked it to sort through emails in her inbox, deleting emails without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don’t always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.
-
Oklahoma5 days agoFamily rallies around Oklahoma father after head-on crash
-
Detroit, MI1 day agoDrummer Brian Pastoria, longtime Detroit music advocate, dies at 68
-
Nebraska7 days agoWildfire forces immediate evacuation order for Farnam residents
-
Georgia4 days agoHow ICE plans for a detention warehouse pushed a Georgia town to fight back | CNN Politics
-
Massachusetts1 week agoMassachusetts community colleges to launch apprenticeship degree programs – The Boston Globe
-
Alaska5 days agoPolice looking for man considered ‘armed and dangerous’
-
Colorado1 week ago‘It’s Not a Penalty’: Bednar Rips Officials For MacKinnon Ejection | Colorado Hockey Now
-
Southwest1 week agoTalarico reportedly knew Colbert interview wouldn’t air on TV before he left to film it