Technology
How a researcher hacked ChatGPT's memory to expose a major security flaw
Company behind ChatGPT disbands AI safety board
Kurt ‘CyberGuy’ Knutsson discusses OpenAI ending its safety task force, actress Scarlett Johansson claiming the company copied her voice and the growing popularity of the voice notes phone feature.
ChatGPT is an amazing tool, and its developer, OpenAI, keeps adding new features from time to time.
Recently, the company introduced a new memory feature in ChatGPT, which essentially enables it to remember things about you. For example, it can recall your age, gender, philosophical beliefs and pretty much anything else.
These memories are meant to remain private, but a researcher recently demonstrated how ChatGPT’s artificial intelligence memory features can be manipulated, raising questions about privacy and security.
I’M GIVING AWAY A $500 GIFT CARD FOR THE HOLIDAYS
ChatGPT introduction screen. (Kurt “CyberGuy” Knutsson)
What is ChatGPT’s Memory feature?
ChatGPT’s memory feature is designed to make the chatbot more personal to you. It remembers information that might be useful for future conversations and tailors responses based on that information, even if you open a different chat. For example, if you mention that you’re vegetarian, the next time you ask for recipes, it will provide only vegetarian options.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
You can also train it to remember specific details about you, such as saying, “Remember that I like to watch classic movies.” In future interactions, it will tailor recommendations accordingly. You have control over ChatGPT’s memory. You can reset it, clear specific memories or all memories, or turn this feature off entirely in your settings.
A prompt on ChatGPT. (Kurt “CyberGuy” Knutsson)
WINDOWS FLAW LETS HACKERS SNEAK INTO YOUR PC OVER WI-FI
The security vulnerability in ChatGPT
As reported by Arstechnica, security researcher Johann Rehberger found that it’s possible to trick the AI into remembering false information through a method called indirect prompt injection. This means the AI can be manipulated into accepting instructions from unreliable sources like emails or blog posts.
For instance, Rehberger demonstrated that he could trick ChatGPT into believing a certain user was 102 years old, lived in a fictional place called the Matrix and thought the Earth was flat. After the AI accepts this made-up information, it will carry it over to all future chats with that user. These false memories could be implanted by using tools like Google Drive or Microsoft OneDrive to store files, upload images or even browse a site like Bing — all of which could be manipulated by a hacker.
Rehberger submitted a follow-up report that included a proof of concept, demonstrating how he could exploit the flaw in the ChatGPT app for macOS. He showed that by tricking the AI into opening a web link containing a malicious image, he could make it send everything a user typed and all the AI’s responses to a server he controlled. This meant that if an attacker could manipulate the AI in this way, they could monitor all conversations between the user and ChatGPT.
Rehberger’s proof-of-concept exploit demonstrated that the vulnerability could be used to exfiltrate all user input in perpetuity. The attack isn’t possible through the ChatGPT web interface, thanks to an API OpenAI rolled out last year. However, it was still possible through the ChatGPT app for macOS.
When Rehberger privately reported the finding to OpenAI in May, the company took it seriously and mitigated this issue by ensuring that the model doesn’t follow any links generated within its own responses, like those involving memory and similar features.
HOW TO REMOVE YOUR PRIVATE DATA FROM THE INTERNET
Johann Rehberger’s ChatGPT conversation. (Johann Rehberger)
CYBER SCAMMERS USE AI TO MANIPULATE GOOGLE SEARCH RESULTS
OpenAI’s response
After Rehberger shared his proof of concept, OpenAI engineers took action and released a patch to address this vulnerability. They released a new version of the ChatGPT macOS application (version 1.2024.247) that encrypts conversations and fixes the security flaw.
So, while OpenAI has taken steps to address the immediate security flaw, there are still potential vulnerabilities related to memory manipulation and the need for ongoing vigilance in using AI tools with memory features. The incident underscores the evolving nature of security challenges in AI systems.
The company says, “It’s important to note that prompt injection in large language models is an area of ongoing research. As new techniques emerge, we address them at the model layer via instruction hierarchy or application-layer defenses like the ones mentioned.”
How do I disable ChatGPT memory?
If you’re not cool with ChatGPT keeping stuff about you or the chance that it could let a bad actor access your data, you can just turn off this feature in the settings.
- Open the ChatGPT app or website on your computer or smartphone.
- Click on the profile icon in the top right corner of the screen.
- Go to Settings and then select Personalization.
- Switch the Memory option off, and you’re all set.
This disables ChatGPT’s ability to retain information between conversations, giving you full control over what it remembers or forgets.
A man using ChatGPT on his laptop (Kurt “CyberGuy” Knutsson)
DON’T LET SNOOPS NEARBY LISTEN TO YOUR VOICEMAIL WITH THIS QUICK TIP
Cybersecurity best practices: Protecting your data in the age of AI
As AI technologies like ChatGPT become more prevalent, it’s crucial to adhere to cybersecurity best practices to protect your personal information. Here are some tips for enhancing your cybersecurity:
1. Regularly review privacy settings: Stay informed about what data is being collected. Periodically check and adjust privacy settings on AI platforms like ChatGPT and others to ensure you’re only sharing information you’re comfortable with.
2. Be cautious about sharing sensitive information: Less is more when it comes to personal data. Avoid disclosing sensitive details such as your full name, address, or financial information in conversations with AI.
3. Use strong, unique passwords: Create passwords that are at least 12 characters long, combining letters, numbers, and symbols, and avoid reusing them across different accounts. Consider using a password manager to generate and store complex passwords.
4. Enable two-factor authentication (2FA): Add an extra layer of security to your ChatGPT and other AI accounts. By requiring a second form of verification, such as a text message code, you significantly reduce the risk of unauthorized access.
5. Keep software and applications up to date: Stay ahead of vulnerabilities. Regular updates often include security patches that protect against newly discovered threats, so enable automatic updates whenever possible.
6. Have strong antivirus software: In an age where AI is everywhere, protecting your data from cyber threats is more important than ever. Adding strong antivirus software to your devices adds a critical layer of protection. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2024 antivirus protection winners for your Windows, Mac, Android & iOS devices.
7. Regularly monitor your accounts: Catch issues early. Frequently check bank statements and online accounts for any unusual activity, which can help you identify potential breaches quickly.
Kurt’s key takeaways
As AI tools like ChatGPT get smarter and more personal, it’s pretty interesting to think about how they can tailor conversations to us. But, as Johann Rehberger’s findings remind us, there are some real risks involved, especially when it comes to privacy and security. While OpenAI is able to mitigate these issues as they arise, it also shows that we need to keep a close eye on how these features work. It’s all about finding that sweet spot between innovation and keeping our data safe.
What are your thoughts on AI remembering personal details—do you find it helpful, or does it raise privacy concerns for you? Let us know by writing us at Cyberguy.com/Contact
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Ask Kurt a question or let us know what stories you’d like us to cover.
Follow Kurt on his social channels: Answers to the most-asked CyberGuy questions:
New from Kurt:
Copyright 2024 CyberGuy.com. All rights reserved.
Technology
Xbox is now XBOX
Xbox just allcapsmaxxed: Meet XBOX. This isn’t a joke; Microsoft appears to be actually rebranding Xbox to XBOX. Asha Sharma, Xbox CEO, ran a poll on X earlier this week, asking fans whether Microsoft should use Xbox or XBOX. The results were in favor of XBOX, and the company has now renamed its X account.
Curiously, the Threads and Bluesky accounts for Xbox haven’t been renamed yet, but if Microsoft is going ahead with a rebranding then I expect those will change soon. I asked Microsoft to comment on this potential Xbox rebranding and the company simply referred me to Sharma’s post.
The use of all caps for Xbox is a return to original form, though. Microsoft’s first Xbox logo for its console was all caps, and the company has favored using similar capped versions for the Xbox 360, Xbox One, and Xbox Series X / S console logos.
The apparent rebranding comes just a few weeks after Sharma scrapped Microsoft Gaming and renamed Microsoft’s gaming division back to Xbox. It’s part of Sharma’s continued promise of a “return of Xbox,” which has involved fan-focused console updates, a new Xbox logo, Game Pass pricing changes, and lots more in recent weeks.
Technology
AI data centers may soon ride ocean waves
NEWYou can now listen to Fox News articles!
Artificial intelligence (AI) already shows up in your phone, your searches and plenty of apps you use every day. Now, some Silicon Valley investors are betting the machines behind those AI answers could one day run at sea.
A company called Panthalassa has raised $140 million in new funding to develop and deploy autonomous, floating AI computing nodes powered by ocean waves. The Series B round brings Panthalassa’s total funding to $210 million, a sign that investors are taking this ocean-based AI idea seriously. The round was led by Peter Thiel, the Palantir co-founder, and the company says the money will help complete a pilot manufacturing facility near Portland, Oregon. Panthalassa also plans to deploy its Ocean-3 pilot node series in the northern Pacific Ocean later in 2026.
Instead of building another giant AI data center on land, Panthalassa wants to place computing power out at sea. Ocean waves would generate electricity. Seawater would help with cooling. Onboard computing systems would process AI prompts and send the results back to land through low-Earth-orbit satellites.
Sign up for my FREE CyberGuy Report
LOWERING YOUR ELECTRIC BILL COULD BE FLOATING IN THE OCEAN
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
META BUILDS WORLD’S LARGEST AI SUPERCLUSTERS FOR THE FUTURE
Panthalassa’s Ocean-2 prototype rides in open water during testing, giving a real-world look at the kind of floating wave-energy system behind the company’s ocean AI plan. (Panthalassa)
How AI data centers at sea could work
Panthalassa’s floating nodes are designed to capture wave motion and turn it into electricity. The company says it has spent a decade developing the technology behind its power generation, onboard computing and autonomous ocean operations. Its earlier Ocean-1, Ocean-2 and Wavehopper prototypes were tested in 2021 and 2024. Think of each node like a floating power station with AI hardware inside. Waves move the system. That motion helps drive a generator. The power then feeds the onboard chips.
WHY AI IS CAUSING SUMMER ELECTRICITY BILLS TO SOAR
The company’s plan is to use those chips for AI inference. That is the part of AI where a model responds to your prompt after it has already been trained. In simple terms, it is what happens when you ask a chatbot a question and get an answer back. That makes the ocean plan a little easier to understand. Training massive AI models requires huge data movement and tight coordination. Answering prompts may be more realistic for a floating node, at least in some situations.
Why AI data centers are moving offshore
AI data centers need huge amounts of electricity. They also need space, cooling systems and local support from communities that may not want a massive facility nearby. Those problems have pushed companies to look for unusual answers. Ocean-based computing is one of them.
Panthalassa says its nodes would operate far from shore in wave-rich parts of the ocean. The goal is to use that wave energy directly onboard instead of sending the power back to land. “We’ve built a technology platform that operates in the planet’s most energy-dense wave regions, far from shore, and turns that resource into reliable clean power,” said Garth Sheldon-Coulson, Panthalassa’s co-founder and CEO.
A SUPERCOMPUTER CHIP GOING TO SPACE COULD CHANGE LIFE ON EARTH
The ocean also offers cold surrounding water. That could help cool the chips onboard. Cooling is a major issue because data centers produce a lot of heat. Panthalassa is taking a different path from traditional land-based data centers. Instead of pulling more power from the grid, it wants floating nodes that generate their own electricity from waves.
A SUPERCOMPUTER CHIP GOING TO SPACE COULD CHANGE LIFE ON EARTH
The Ocean-2 prototype sits inside a coastal facility, showing the size and shape of Panthalassa’s floating node before deployment at sea. (Panthalassa)
The satellite problem for ocean AI data centers
The ocean may help with power and cooling, but it creates another problem: connection. Traditional data centers rely on high-capacity fiber-optic connections because they need to move huge amounts of data fast. A floating node far out at sea may depend on low-Earth-orbit satellite links. That can work for some AI responses, but it may be slower and more limited than fiber.
SOLAR DEVICE TRANSFORMS USED TIRES TO HELP PURIFY WATER SO THAT IT’S DRINKABLE
The challenge grows when multiple nodes need to work together. AI systems often depend on fast communication between chips, servers and storage. If those parts are floating in the ocean and talking by satellite, coordination gets harder. That means AI data centers at sea may not replace land-based data centers anytime soon. They may be better suited for certain AI tasks where the model can live onboard, and the response does not require constant back-and-forth with other machines.
Repairing floating AI nodes could be difficult
There is another practical question: What happens when something breaks? A land-based data center can send in technicians. A floating AI node in rough seas may need a ship, special equipment and the right weather window. That adds cost and delay.
Panthalassa says it is developing autonomous systems meant for harsh ocean conditions. Its press release says Ocean-3 testing is meant to demonstrate AI inference and refine manufacturing before commercial deployments in 2027. Still, the ocean is brutal. Saltwater eats away at equipment. Storms can turn a routine repair into a major operation. Constant motion also puts stress on the hardware. For this plan to work, Panthalassa will have to show that each node can keep running for years in harsh ocean conditions without frequent human repairs.
WHY AI IS CAUSING SUMMER ELECTRICITY BILLS TO SOAR
Panthalassa’s Ocean-2 prototype is transported by barge, a reminder that building AI infrastructure at sea also means solving major deployment and maintenance challenges. (Panthalassa)
Ocean data centers have been tested before
Ocean data centers are not new. Microsoft experimented with underwater data center servers through Project Natick, including tests in 2015 and 2018. Those tests showed that sealed underwater servers could run reliably while using seawater for cooling, with Microsoft reporting a lower failure rate than comparable land-based systems. Microsoft later ended the project.
Chinese companies have also reportedly pushed ahead with underwater data center projects near Hainan and Shanghai. Keppel has explored floating data center designs in Singapore, where land constraints make the concept especially attractive. Panthalassa’s plan goes in a different direction. It combines wave power with onboard AI chips and satellite-based results. It also depends on floating nodes that would need to operate far from the kind of support a normal data center gets. That is why the idea is getting attention. It is also why skepticism is fair.
FOX NEWS AI NEWSLETTER: SCAMMERS CAN EXPLOIT YOUR DATA FROM JUST 1 CHATGPT SEARCH
What AI data centers at sea mean for you
For now, this will not change how your phone or computer works. You will not suddenly see a “powered by ocean waves” label on your favorite AI app. But the bigger picture affects everyone. AI needs an incredible amount of electricity. As more companies add AI tools to their products, they need more places to run those systems. That pressure can affect energy grids, water use, local battles over new data centers and even your utility bills over time.
Panthalassa argues its approach could reduce the need for new data centers and power plants on land. That could ease pressure on local communities and the grid, but the company still has to prove the system can work reliably at sea. If ocean-based AI moves beyond testing, it could also raise fresh questions about marine maintenance, environmental oversight and who controls computing infrastructure in international waters.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaways
Everyone is using AI on their phones and computers these days, but the heavy lifting often happens in huge data centers behind the scenes. That is why Panthalassa’s ocean plan is getting attention. The company wants to use waves for power and seawater for cooling. The hard part is proving that floating AI nodes can survive rough seas, limited satellite links and complicated maintenance. If Panthalassa can pull it off, ocean-based AI could become part of the tech we use every day. If it cannot, it may show just how difficult it is to keep feeding AI’s growing demand for power.
If this kind of ocean-powered AI takes off, would you worry about what these floating nodes could mean for our oceans? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
OpenAI keeps shuffling its executives in bid to win AI agent battle
OpenAI announced yet another reorganization Friday, consolidating certain areas and making company president Greg Brockman the official lead of all things product.
In a memo viewed by The Verge, Brockman wrote that since OpenAI’s product strategy for this year is to go all-in on AI agents, the company is combining its products to “invest in a single agentic platform and to merge ChatGPT and Codex into one unified agentic experience for all.”
To do this, the company is making a suite of org chart changes, although it’s still operating under some of the same ones from last month. That’s when AGI boss Fidji Simo went on medical leave and OpenAI announced that Brockman would be in charge of product strategy and CSO Jason Kwon, CFO Sarah Friar, and CRO Denise Dresser would take control of business operations.
It’s all part of OpenAI’s recent strategic shift to focus on key revenue drivers like coding and enterprise and stop pouring resources into “side quests” ahead of its potential IPO later this year and amid investor pressure to turn a profit.
In Simo’s continued absence, Brockman’s role leading product strategy is now official, as well as the company’s “scaling” arm. Under Brockman will be four different pillars. The first is core product and platform, led by Thibault Sottiaux, who has been OpenAI’s engineering lead for Codex, and the second is critical enterprise industries, led by ChatGPT head Nick Turley. Third is the consumer pillar, such as health, commerce, and personal finance, which will be led by Ashley Alexander, who has been its healthcare products VP. The fourth pillar — core infrastructure, ads, data science, and growth — will be led by Vijaye Raji, who has been OpenAI’s CTO of applications.
Brockman wrote in the memo that OpenAI’s goal is now to “bring agents to ChatGPT scale, in order to give individuals and organizations significantly more value and utility from our products.”
-
Fitness11 seconds ago‘I’m a neuroscientist – these are the 3 best workouts for slowing cognitive decline’
-
Movie Reviews12 minutes agoKaruppu (Veerabhadrudu) Movie Review – Gulte
-
World24 minutes ago
Supreme Court rejects Virginia’s bid to restore congressional map favoring Democrats
-
News30 minutes agoTop Drug Regulator Is Fired From the F.D.A.
-
Politics36 minutes agoVideo: Why Were These C.E.O.s in Beijing With Trump?
-
Business42 minutes agoWhat Trump Gained, and Didn’t, From China
-
Health54 minutes agoMicro-Walking Plan for Weight Loss: Harvard Doctor Calls It a ‘Wonder Drug’
-
Culture1 hour agoEllen Burstyn on Her Favorite Books and Her Love of Poetry