Technology
Third-party breach exposes ChatGPT account details
ChatGPT went from novelty to necessity in less than two years. It is now part of how you work, learn, write, code and search. OpenAI has said the service has roughly 800 million weekly active users, which puts it in the same weight class as the biggest consumer platforms in the world.
When a tool becomes that central to your daily life, you assume the people running it can keep your data safe. That trust took a hit recently after OpenAI confirmed that personal information linked to API accounts had been exposed in a breach involving one of its third-party partners.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
The breach highlights how even trusted analytics partners can expose sensitive account details. (Kurt “CyberGuy” Knutsson)
What you need to know about the ChatGPT breach
OpenAI’s notification email places the breach squarely on Mixpanel, a major analytics provider the company used on its API platform. The email stresses that OpenAI’s own systems were not breached. No chat histories, billing information, passwords or API keys were exposed. Instead, the stolen data came from Mixpanel’s environment and included names, email addresses, Organization IDs, coarse location and technical metadata from user browsers.
That sounds harmless on the surface. The email calls this “limited” analytics data, but the label feels like PR cushioning more than anything else. For attackers, this kind of metadata is gold. A dataset that reveals who you are, where you work, what machine you use and how your account is structured gives threat actors everything they need to run targeted phishing and impersonation campaigns.
The biggest red flag is the exposure of Organization IDs. Anyone who builds on the OpenAI API knows how sensitive these identifiers are. They sit at the center of internal billing, usage limits, account hierarchy and support workflows. If an attacker quotes your Org ID during a fake billing alert or support request, it suddenly becomes very hard to dismiss the message as a scam.
OpenAI’s own reconstructed timeline raises bigger questions. Mixpanel first detected a smishing attack on November 8. Attackers accessed internal systems the next day and exported OpenAI’s data. That data was gone for more than two weeks before Mixpanel told OpenAI on November 25. Only then did OpenAI alert everyone. It is a long and worrying silent period, and it left API users exposed to targeted attacks without even knowing they were at risk. OpenAI says it cut Mixpanel off the next day.
The size of the risk and the policy problem behind it
The timing and the scale matter here. ChatGPT sits at the center of the generative AI boom. It does not just have consumer traffic. It has sensitive conversations from developers, employees, startups and enterprises. Even though the breach affected API accounts rather than consumer chat history, the exposure still highlights a wider issue. When a platform reaches almost a billion weekly users, any crack becomes a national-scale problem.
Regulators have been warning about this exact scenario. Vendor security is one of the weak links in modern tech policy. Data protection laws tend to focus on what a company does with the information you give them. They rarely provide strong guardrails around the entire chain of third-party services that process this data along the way. Mixpanel is not an obscure operator. It is a widely used analytics platform trusted by thousands of companies. Yet it still lost a dataset that should never have been accessible to an attacker.
Companies should treat analytics providers the same way they treat core infrastructure. If you cannot guarantee that your vendors follow the same security standards you do, you should not be collecting the data in the first place. For a platform as influential as ChatGPT, the responsibility is even higher. People do not fully understand how many invisible services sit behind a single AI query. They trust the brand they interact with, not the long list of partners behind it.
Attackers can use leaked metadata to craft convincing phishing emails that look legitimate. (Jaap Arriens/NurPhoto via Getty Images)
8 steps you can take to stay safer when using AI tools
If you rely on AI tools every day, it’s worth tightening your personal security before your data ends up floating around in someone else’s analytics dashboard. You cannot control how every vendor handles your information, but you can make it much harder for attackers to target you.
1) Use strong, unique passwords
Treat every AI account as if it holds something valuable because it does. Long, unique passwords stored in a reliable password manager reduce the fallout if one platform gets breached. This also protects you from credential stuffing, where attackers try the same password across multiple services.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
2) Turn on phishing-resistant 2FA
AI platforms have become prime targets, so protect them with stronger 2FA. Use an authenticator app or a hardware security key. SMS codes can be intercepted or redirected, which makes them unreliable during large-scale phishing campaigns.
3) Use strong antivirus software
Another important step is to install strong antivirus software on all your devices. It is the best way to safeguard yourself from malicious links that install malware and potentially access your private information, and it can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
4) Limit what personal or sensitive data you share
Think twice before pasting private conversations, company documents, medical notes or addresses into a chat window. Many AI tools store recent history for model improvements unless you opt out, and some route data through external vendors. Anything you paste could live on longer than you expect.
5) Use a data-removal service to shrink your online footprint
Attackers often combine leaked metadata with information they pull from people-search sites and old listings. A good data-removal service scans the web for exposed personal details and submits removal requests on your behalf. Some services even let you send custom links for takedowns. Cleaning up these traces makes targeted phishing and impersonation attacks much harder to pull off.
While no service can guarantee the complete removal of your data from the internet, a data-removal service is a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Treat unexpected support messages with suspicion
Attackers know users panic when they hear about API limits, billing failures or account verification issues. If you get an email claiming to be from an AI provider, do not click the link. Open the site manually or use the official app to confirm whether the alert is real.
Events like this show why strengthening your personal security habits matters more than ever. (Kurt “CyberGuy” Knutsson)
7) Keep your devices and software updated
A lot of attacks succeed because devices run outdated operating systems or browsers. Regular updates close vulnerabilities that could be used to steal session tokens, capture keystrokes or hijack login flows. Updates are boring, but they prevent a surprising amount of trouble.
8) Delete accounts you no longer need
Old accounts sit around with old passwords and old data, and they become easy targets. If you’re not actively using a particular AI tool anymore, delete the account and remove any saved information. It reduces your exposure and limits how many databases contain your details.
Kurt’s key takeaway
This breach may not have touched chat logs or payment details, but it shows how fragile the wider AI ecosystem can be. Your data is only as safe as the least secure partner in the chain. With ChatGPT now approaching a billion weekly users, that chain needs tighter rules, better oversight and fewer blind spots. If anything, this should be a reminder that the rush toward AI adoption needs stronger policy guardrails. Companies cannot hide behind transparent emails after the fact. They need to prove that the tools you rely on every day are secure at every layer, including the ones you never see.
Do you trust AI platforms with your personal information? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
AO3 is finally out of beta after 17 years
Archive of Our Own (AO3) is officially exiting beta. The Organization for Transformative Works — the nonprofit behind the fanfiction site — announced the change on Thursday, 17 years after AO3’s launch in 2009.
“Since 2009, AO3 has grown and changed a lot,” the announcement says. “We’ve introduced many features over the years through the efforts of our volunteers and coding contributors, as well as the contractors we’ve been able to hire thanks to generous donations from our users.”
The post highlights some of the features AO3 has added since its launch, including a tagging system, fanworks downloads, privacy settings that allow creators to limit access to their work, and more. Just because AO3 is exiting beta doesn’t mean the updates will stop flowing:
As the AO3 software has been stable for a long time, the change is mostly cosmetic and does not indicate that everything is finalized or perfectly working. Exiting beta doesn’t mean we’ll stop continuing to improve AO3—our volunteer coders and community contributors will still be working to add to and improve AO3 every day.
One of the most visible changes to the site is the removal of the tiny “beta” label from the AO3 logo displayed at the top of the platform. (AO3 briefly changed the “beta” to “omega” for April Fools’ Day this year.)
You can keep tabs on the updates coming to AO3 by viewing its projects on Jira.
Technology
US targets Chinese robots over security fears
A bipartisan group of lawmakers wants to draw a clear line on where certain robots may operate in the United States. Senators Tom Cotton (R-Ark.) and Chuck Schumer (D-N.Y.) recently introduced legislation that would ban the federal government from using robots made by foreign adversaries, a category that includes China but can also apply to other designated countries.
The proposal, called the American Security Robotics Act, targets unmanned ground systems. That includes humanoid robots and remote-controlled surveillance machines. The concern is not just what these robots can do. It is what they could be doing behind the scenes. Lawmakers say these systems are already being marketed to U.S. research labs, universities, law enforcement agencies and even consumers.
Advanced humanoid robots like this from Unitree Robotics highlight how quickly the technology is evolving and why officials are raising data security concerns. (Unitree)
Why lawmakers say these robots pose a risk
According to statements from the lawmakers involved, the core issue is security. Schumer warned that Chinese robotics companies could embed hidden access points inside their systems. These so-called backdoors could allow unauthorized access to sensitive data or even enable remote control. Schumer said, “The Chinese Communist Party has shown that they are willing to lie and cheat to get ahead at the expense of the American people and our national security. They are running their standard playbook, this time in robotics, trying to flood the U.S. market with their technology, which presents real security risks and threats to Americans’ privacy and American research and industry.”
He said the Chinese government has a track record of prioritizing its own strategic goals over transparency, raising concerns about how that approach could extend into robotics.
A humanoid robot from Unitree Robotics, similar to the systems lawmakers are scrutinizing over potential security risks in government use. (Unitree)
What the bill would actually do
The American Security Robotics Act focuses specifically on federal use. The bill targets countries designated as foreign adversaries, including Communist China, according to the lawmakers.
The legislation targets “unmanned ground vehicle systems,” including humanoid robots and autonomous patrol technologies used by federal agencies. If passed, it would block U.S. government agencies from purchasing or operating unmanned ground vehicles built by companies tied to foreign adversaries. That includes:
- Humanoid robots used in public-facing roles
- Remote surveillance robots
- Other automated ground systems used in government operations
It also blocks federal agencies from using these systems through contractors or funding their use through grants or agreements. Cotton said, “Robots made by Communist China threaten Americans’ privacy and our national security. Our bill will ban the federal government from buying and operating these devices made in countries that wish us harm.”
The operational ban would take effect one year after the law is enacted. The bill includes exceptions for national security, research, testing and certain law enforcement or intelligence activities under strict conditions.
The bill does not ban these products outright for consumers or private companies. Instead, it draws a boundary around government adoption where sensitive data and infrastructure are involved. Meanwhile, Rep. Elise Stefanik (R-N.Y.) is introducing a companion bill in the House, signaling coordinated support across both chambers of Congress.
The timing matters as robotics competition heats up
This legislation comes at a moment when China is rapidly advancing in robotics. Recent demonstrations in Beijing showcased a new generation of highly capable robots, highlighting how quickly the technology is evolving. That momentum has raised alarms in Washington about falling behind while also importing potential risks. Stefanik said, “We must continue to promote and propel America’s robotics superiority while safeguarding our privacy and national security from adversaries.”
At the same time, U.S. companies are pushing forward. One example came when a humanoid robot from Figure AI recently appeared at a White House education summit alongside First Lady Melania Trump. She suggested robots like these could eventually play a role in education, hinting at how deeply this technology could integrate into everyday life.
Multiple humanoid robots developed by Unitree Robotics show the growing capabilities of foreign-made systems now entering global markets. (Unitree)
What this means for you
If you are not working inside the federal government, this bill will not directly affect what you can buy or use. Still, it signals something bigger. First, it shows that robotics is no longer just about convenience or innovation; it is now part of national security conversations. Second, it highlights growing concern about where your data goes when you interact with connected devices, and how much access foreign-made hardware could have to data inside your home or workplace. That applies whether it is a robot, a smart home device or a surveillance system. Finally, it suggests that future restrictions could expand beyond government use if risks are confirmed or public concern grows.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
This is not just about robots walking through offices or classrooms. It is about trust. Lawmakers are drawing attention to a question that has followed other technologies before. Who built it, and who might still have access to it after it is deployed? As robotics becomes more common in public spaces, homes and workplaces, those questions will only get louder. The technology is moving fast. Policy is trying to catch up.
Would you feel comfortable interacting with a humanoid robot if you did not know who ultimately controlled its data? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
The best AirPods deals you can get right now
If you know where to look, you can often score deals on Apple’s ever-expanding AirPods lineup. Both the AirPods Pro 3 and the AirPods 4 (with and without ANC) now consistently receive discounts. And while major shopping events like Black Friday and Amazon Prime Day have delivered some of the biggest price drops, there are still good deals to be found on every model — including the recently released AirPods Max 2.
Below, we’ve rounded up the best deals currently available on each set of AirPods, including both iterations of the AirPods 4 and AirPods Max, as well as the third-gen AirPods Pro.
At the end of 2024, Apple introduced the AirPods 4, a pair of wireless earbuds available in two variations: a $129 standard model and a $179 noise-canceling model. Both versions represent significant upgrades over the third-gen AirPods, with a more comfortable design and improved audio performance. They’re also better for taking calls thanks to Apple’s Voice Isolation feature, which focuses the mics on your voice so you can be heard more clearly in noisy environments.
The $179 AirPods 4 with Active Noise Cancellation offer a surprisingly effective noise-canceling mode, a helpful transparency mode, and several other Pro-level features. The latest AirPods Pro do a better job of tuning out noise, but the AirPods 4 with ANC still do a good job of reducing sound. They also feature other perks formerly reserved for Apple’s top-of-the-line earbuds, including wireless charging and a case with a built-in speaker that allows you to easily track it down via Apple’s Find My app.
Given they’ve been out for over a year, we consistently see discounts for both iterations of the AirPods 4. During Black Friday, we saw the standard model drop to a new low of $74; however, right now, they’re only down to $119 ($10 off) at Amazon, Walmart, and B&H Photo. The AirPods 4 with ANC, meanwhile, are on sale for $154.99 ($24 off) at Amazon, Walmart, and Costco (for members), which is significantly more than their recent low of $99.
The best AirPods Pro 3 deals
At its “Awe Dropping” event in September, Apple introduced the AirPods Pro 3. In addition to improved ANC and sound, the third-gen earbuds include a built-in heart rate sensor that syncs with the iPhone Fitness app, allowing you to track your pulse and calories burned across more than 50 workout types. They’re also more comfortable and secure than their predecessor, thanks to a redesigned, angled fit and five ear tip sizes — including a new XXS option. Additionally, they carry a more robust IP57 rating and support Apple’s new live translation feature, which, in our testing, generally conveys the gist well but still can’t beat a human interpreter.
Given how recently they launched, we’ve been surprised by how often the AirPods Pro 3 have been discounted. In fact, last month we saw them drop to $199 ($50 off), which is $15 shy of their all-time low. Unfortunately, while they’re still on sale, they’ve since increased in price to $224 ($25 off) at retailers like Amazon and Walmart.
The best AirPods Max deals
The AirPods Max aren’t the iconic in-ears that have become synonymous with the AirPods name. Both the first-gen Max and the newer AirPods Max 2 are large and luxurious, made of aluminum, steel, and mesh fabric that remains comfortable during extended listening sessions. The original pair delivered clear, expansive sound, great noise cancellation, and lossless audio over USB-C; however, with the Max 2, Apple built upon that excellent foundation with improved ANC and a built-in amplifier for better sound. They also feature Apple’s newer H2 chip, enabling AI-powered live translation, adaptive audio, and other features once reserved for the AirPods Pro line. The over-ears aren’t the best noise-canceling headphones for everyone — blame the sticker price — but for iPhone users, they’re hard to beat.
The AirPods Max 2 retail for $549 — the same price as the original model — but you can currently save $20 on both the black and white versions at Amazon and Costco (if you’re a member), which is the first discount we’ve seen on the recently released headphones. If you’re okay with picking up the last-gen model, the original AirPods Max with USB-C are on sale in select colors for $399.99 ($150 off) at Woot through April 3rd, matching their best price to date. They’re also available in a wider range of hues at Amazon, Walmart, Target, and other retailers for $449.99 ($100 off), which is still a hefty discount.
Update, April 2nd: Updated to reflect current pricing and availability, as well as the recent release of the AirPods Max 2.