Technology
OpenAI admits AI browsers face unsolvable prompt attacks
NEWYou can now listen to Fox News articles!
Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY
AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)
Why prompt injection isn’t going away
In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.
OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.
OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.
This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.
FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE
Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it. (Kurt “CyberGuy” Knutsson)
The risk trade-off with AI browsers
OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing, and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.
Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.
The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.
Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS
As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)
7 steps you can take to reduce risk with AI browsers
You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.
1) Limit what the AI browser can access
Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.
2) Require confirmation for every sensitive action
Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.
3) Use a password manager for all accounts
A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.
Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com
4) Run strong antivirus software on your device
Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
5) Avoid broad or open-ended instructions
Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.
6) Be careful with AI summaries and automated scans
When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.
7) Keep your browser, AI tools and operating system updated
Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaway
There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.
Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Halide co-founder is suing former partner Sebastiaan de With for taking source code to Apple
Lux Optics co-founder Sebastiaan de With made headlines when he joined Apple in late January. The company was behind Halide, one of the most popular photography apps for the iPhone, which gained a cult following for its robust pro-level controls.
Apple was apparently a big enough fan that it tried to acquire the developer last summer. Those talks never bore fruit, and eventually the company simply hired de With. At the time, it was widely believed that Apple had poached him from Lux. But new allegations from a lawsuit filed by co-founder Ben Sandofsky in the California Superior Court of Santa Cruz claim de With was fired for financial misconduct in December of 2025.
According to The Information, the suit “accuses de With of improperly using more than $150,000 in Lux corporate funds to pay for personal expenses,” as well as “taking Lux source code and confidential material with him when he joined Apple.”
An attorney for de With denied those claims and said that “The attempt to insert Apple into this dispute appears designed to create leverage and attract attention.“
Technology
Creepy robot mom that gives birth is training future midwives
NEWYou can now listen to Fox News articles!
Most hospital training labs use basic dummies or simple mannequins to teach medical skills. Students practice procedures, learn techniques and move on to real patients later. But a new childbirth simulator called Mama Anne takes training to a very different level. This lifelike robot blinks, breathes and even talks while helping midwifery students practice delivering babies before they ever step into a real delivery room. And if the idea of a robot going into labor feels a little creepy, you are not alone.
At York St. John University in York, England, educators have introduced the simulator as part of a new approach to hands-on medical training. The technology allows students to experience complex labor scenarios in a safe environment where mistakes become learning moments instead of medical emergencies. And yes, the robot actually gives birth.
Sign up for my FREE CyberGuy Report. Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
ROBOTS POWER BREAKTHROUGH IN PREGNANCY RESEARCH, BOOSTING IVF SUCCESS RATES
Mama Anne is a high-fidelity childbirth simulator used to train midwifery students in realistic labor and delivery scenarios before they work with real patients. (Laerdal Medical)
How the robot childbirth simulator trains future midwives
The simulator known as Mama Anne looks and behaves much like a real patient in labor. Developed by Laerdal Medical, the high-fidelity mannequin was designed to recreate real childbirth conditions with startling realism.
Students interact with Mama Anne as if she were an actual patient. Her eyes blink and react to light. Her chest rises and falls as she breathes. She even has pulses that can be felt in multiple places across the body. Most importantly, she can deliver a baby mannequin during a simulated birth.
Unlike older training models that stayed mostly static, this simulator moves and reacts during labor. It can deliver in several positions, including lying back or on all fours. It can also display vital signs that change in response to medical complications. In short, it turns a classroom exercise into something that feels much closer to a real hospital scenario.
Why robot childbirth simulators are becoming essential
For decades, midwifery training relied heavily on textbooks, observation and limited hands-on practice. That approach left a major gap. Many students encountered their first true emergencies only after they began working in clinical settings.
Now technology is filling that gap. Simulation tools like Mama Anne allow students to practice high-risk situations repeatedly before they ever treat a real patient. As a result, students build confidence while instructors guide them through difficult scenarios.
For example, the simulator can recreate several dangerous childbirth complications, including:
- Postpartum hemorrhage with realistic blood loss
- Shoulder dystocia when a baby becomes stuck during delivery
- Pre-eclampsia and eclampsia with changing vital signs
- Sepsis symptoms that require rapid treatment
Students also practice everyday clinical skills such as monitoring fetal heart rate, giving injections and managing labor from start to finish. Because the training environment is controlled, instructors can pause a scenario, explain a mistake and run it again.
The robot even teaches communication skills
Medical training is not only about technical procedures. Communication with patients matters just as much. Mama Anne helps with that, too.
The simulator can speak using recorded responses or real-time dialogue through hidden speakers. Students must explain procedures, ask for consent and reassure their patient just as they would in a real delivery room.
If someone touches the simulator without asking first, it can react and vocalize discomfort. That feature reinforces one of the most important lessons in modern healthcare: patient consent and respectful care always come first.
REMOTE ROBOT SURGERY REMOVES CANCER 1,500 MILES AWAY
The lifelike simulator can blink, breathe, display vital signs and deliver a baby mannequin to recreate complex childbirth situations. (Laerdal Medical)
Why universities are investing in this technology
Educators believe simulation training dramatically improves how healthcare students prepare for the real world. Rebecca Beggan, midwifery program lead at York St. John University, says hands-on simulation helps students build both competence and confidence before clinical placements.
Students can experience an entire labor scenario from beginning to end. They learn antenatal care, labor management and postnatal care in a single immersive exercise. Instructors also say the technology helps protect students from the emotional shock of encountering their first medical emergency without preparation. Instead of facing those situations cold, students enter clinical placements with real practice under their belt.
The future of childbirth training
The arrival of hyper-realistic simulators like Mama Anne suggests medical education is entering a new era. Instead of learning mostly through observation and experience, future healthcare professionals may train through realistic simulations that mirror real hospital conditions.
That shift could change everything from how nurses train to how surgeons rehearse complex procedures. Technology will never replace human caregivers. However, it can help prepare them better than ever before.
What this means to you
Even if you never step into a medical classroom, this technology could still affect your life. Better training often leads to better patient outcomes. When healthcare providers practice emergency scenarios in advance, they react faster and make fewer mistakes during real emergencies.
For expectant parents, that can mean safer deliveries and more confident medical teams in the room. Simulation training also reflects a broader shift in healthcare education across the United States. Many hospitals and universities are adopting high-fidelity simulators for surgery, emergency care and trauma response. The goal is simple: Let students practice difficult situations before lives are on the line.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
A robot that gives birth may seem a little creepy at first. Still, tools like this could become common in medical training down the road. Students gain hands-on experience. Instructors guide them through emergencies. Patients benefit from better-prepared medical teams. The next generation of midwives may enter the delivery room with far more practice than any class before them. As medical simulators grow more realistic and more widespread, one question naturally follows.
Students use the simulator to practice emergencies like postpartum hemorrhage, shoulder dystocia and other complications in a safe training environment. (Laerdal Medical)
If robots can train doctors to deliver babies today, what other parts of healthcare might soon be practiced first in simulation labs instead of hospitals? Let us know by writing to us at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Sign up for my FREE CyberGuy Report. Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
The AirPods Pro 3 are $50 off right now, nearly matching their best-ever price
Less than a week ago, Apple announced the forthcoming AirPods Max 2, a pair of over-ear headphones that leverage the company’s H2 chip for AI-powered live translation, conversation awareness, and a host of newer features. However, if you’re okay with a pair of earbuds, the AirPods Pro 3 offer access to all the same features for less — especially given they’re currently on sale at Amazon, Walmart, and Best Buy for $199.99 ($50 off), matching their second-best price to date.
For iPhone owners, nothing else really compares to the AirPods Pro 3. Apple’s latest pair of premium earbuds deliver the best active noise cancellation and richest sound of any AirPods model to date, combined with a more comfortable, angled design that fits securely and naturally in your ear canal. They also feature a new XXS ear tip size and a more robust IP57 rating for sweat and water resistance, making them better suited for long-distance runs and various gym activities.
Speaking of workouts, the Pro 3 can also pull double duty as a fitness tracker, thanks to a built-in heart rate sensor that works with Apple’s Fitness app to track calories burned across more than 50 workout types. It’s a welcome addition if you don’t use an Apple Watch; however, it may not be as useful for those who already own and rely on Apple’s wearable for its health tracking and wellness features.
Lastly, as mentioned up top, the AirPods Pro 3 also boast an H2 chip, allowing for the aforementioned real-time translation features and Apple’s newer Voice Isolation tech, which uses machine learning to isolate and enhance voice quality by removing unwanted background noise. That’s on top of their seamless integration with other Apple devices, mind you, which lets you take advantage of automatic device switching and a Find My-compatible charging case.
-
Detroit, MI4 days agoDrummer Brian Pastoria, longtime Detroit music advocate, dies at 68
-
Oklahoma1 week agoFamily rallies around Oklahoma father after head-on crash
-
Nebraska1 week agoWildfire forces immediate evacuation order for Farnam residents
-
Georgia6 days agoHow ICE plans for a detention warehouse pushed a Georgia town to fight back | CNN Politics
-
Alaska1 week agoPolice looking for man considered ‘armed and dangerous’
-
Science1 week agoFederal EPA moves to roll back recent limits on ethylene oxide, a carcinogen
-
Science1 week agoH5N1 bird flu spreads to sea otters and sea lions along San Mateo coast, wildlife experts say
-
Movie Reviews3 days ago‘Youth’ Twitter review: Ken Karunaas impresses audiences; Suraj Venjaramoodu adds charm; music wins praise | – The Times of India