Technology
Third-party breach exposes ChatGPT account details
ChatGPT went from novelty to necessity in less than two years. It is now part of how you work, learn, write, code and search. OpenAI has said the service has roughly 800 million weekly active users, which puts it in the same weight class as the biggest consumer platforms in the world.
When a tool becomes that central to your daily life, you assume the people running it can keep your data safe. That trust took a hit recently after OpenAI confirmed that personal information linked to API accounts had been exposed in a breach involving one of its third-party partners.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
The breach highlights how even trusted analytics partners can expose sensitive account details. (Kurt “CyberGuy” Knutsson)
What you need to know about the ChatGPT breach
OpenAI’s notification email places the breach squarely on Mixpanel, a major analytics provider the company used on its API platform. The email stresses that OpenAI’s own systems were not breached. No chat histories, billing information, passwords or API keys were exposed. Instead, the stolen data came from Mixpanel’s environment and included names, email addresses, Organization IDs, coarse location and technical metadata from user browsers.
That sounds harmless on the surface. The email calls this “limited” analytics data, but the label feels like PR cushioning more than anything else. For attackers, this kind of metadata is gold. A dataset that reveals who you are, where you work, what machine you use and how your account is structured gives threat actors everything they need to run targeted phishing and impersonation campaigns.
The biggest red flag is the exposure of Organization IDs. Anyone who builds on the OpenAI API knows how sensitive these identifiers are. They sit at the center of internal billing, usage limits, account hierarchy and support workflows. If an attacker quotes your Org ID during a fake billing alert or support request, it suddenly becomes very hard to dismiss the message as a scam.
OpenAI’s own reconstructed timeline raises bigger questions. Mixpanel first detected a smishing attack on November 8. Attackers accessed internal systems the next day and exported OpenAI’s data. That data was gone for more than two weeks before Mixpanel told OpenAI on November 25. Only then did OpenAI alert everyone. It is a long and worrying silent period, and it left API users exposed to targeted attacks without even knowing they were at risk. OpenAI says it cut Mixpanel off the next day.
The size of the risk and the policy problem behind it
The timing and the scale matter here. ChatGPT sits at the center of the generative AI boom. It does not just have consumer traffic. It has sensitive conversations from developers, employees, startups and enterprises. Even though the breach affected API accounts rather than consumer chat history, the exposure still highlights a wider issue. When a platform reaches almost a billion weekly users, any crack becomes a national-scale problem.
Regulators have been warning about this exact scenario. Vendor security is one of the weak links in modern tech policy. Data protection laws tend to focus on what a company does with the information you give them. They rarely provide strong guardrails around the entire chain of third-party services that process this data along the way. Mixpanel is not an obscure operator. It is a widely used analytics platform trusted by thousands of companies. Yet it still lost a dataset that should never have been accessible to an attacker.
Companies should treat analytics providers the same way they treat core infrastructure. If you cannot guarantee that your vendors follow the same security standards you do, you should not be collecting the data in the first place. For a platform as influential as ChatGPT, the responsibility is even higher. People do not fully understand how many invisible services sit behind a single AI query. They trust the brand they interact with, not the long list of partners behind it.
Attackers can use leaked metadata to craft convincing phishing emails that look legitimate. (Jaap Arriens/NurPhoto via Getty Images)
8 steps you can take to stay safer when using AI tools
If you rely on AI tools every day, it’s worth tightening your personal security before your data ends up floating around in someone else’s analytics dashboard. You cannot control how every vendor handles your information, but you can make it much harder for attackers to target you.
1) Use strong, unique passwords
Treat every AI account as if it holds something valuable because it does. Long, unique passwords stored in a reliable password manager reduce the fallout if one platform gets breached. This also protects you from credential stuffing, where attackers try the same password across multiple services.
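To see what "long and unique" looks like in practice, here is a minimal sketch in Python using the standard library's `secrets` module, which draws from a cryptographically secure random source. The function name `generate_password` is illustrative; a good password manager does this for you automatically.

```python
import secrets
import string

# Pool of characters: upper- and lowercase letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password using a cryptographically secure generator."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())      # e.g. a 20-character random string
print(generate_password(32))    # longer is better for high-value accounts
```

Because each password is generated independently, a breach of one site never exposes your credentials anywhere else, which is exactly what defeats credential stuffing.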
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
2) Turn on phishing-resistant 2FA
AI platforms have become prime targets, so protect them with stronger 2FA. Use an authenticator app or a hardware security key. SMS codes can be intercepted or redirected, which makes them unreliable during large-scale phishing campaigns.
3) Use strong antivirus software
Another important step you can take to protect yourself from phishing attacks is to install strong antivirus software on all your devices. It can block malicious links that install malware and potentially access your private information, and it can alert you to phishing emails and ransomware scams, helping keep your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
4) Limit what personal or sensitive data you share
Think twice before pasting private conversations, company documents, medical notes or addresses into a chat window. Many AI tools store recent history for model improvements unless you opt out, and some route data through external vendors. Anything you paste could live on longer than you expect.
5) Use a data-removal service to shrink your online footprint
Attackers often combine leaked metadata with information they pull from people-search sites and old listings. A good data-removal service scans the web for exposed personal details and submits removal requests on your behalf. Some services even let you send custom links for takedowns. Cleaning up these traces makes targeted phishing and impersonation attacks much harder to pull off.
While no service can guarantee complete removal of your data from the internet, a data-removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with details they find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Treat unexpected support messages with suspicion
Attackers know users panic when they hear about API limits, billing failures or account verification issues. If you get an email claiming to be from an AI provider, do not click the link. Open the site manually or use the official app to confirm whether the alert is real.
Events like this show why strengthening your personal security habits matters more than ever. (Kurt “CyberGuy” Knutsson)
7) Keep your devices and software updated
A lot of attacks succeed because devices run outdated operating systems or browsers. Regular updates close vulnerabilities that could be used to steal session tokens, capture keystrokes or hijack login flows. Updates are boring, but they prevent a surprising amount of trouble.
8) Delete accounts you no longer need
Old accounts sit around with old passwords and old data, and they become easy targets. If you’re not actively using a particular AI tool anymore, delete it from your account list and remove any saved information. It reduces your exposure and limits how many databases contain your details.
Kurt’s key takeaway
This breach may not have touched chat logs or payment details, but it shows how fragile the wider AI ecosystem can be. Your data is only as safe as the least secure partner in the chain. With ChatGPT now approaching a billion weekly users, that chain needs tighter rules, better oversight and fewer blind spots. If anything, this should be a reminder that the rush toward AI adoption needs stronger policy guardrails. Companies cannot hide behind transparent emails after the fact. They need to prove that the tools you rely on every day are secure at every layer, including the ones you never see.
Do you trust AI platforms with your personal information? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Google’s annual revenue tops $400 billion for the first time
Google’s parent company, Alphabet, has earned more than $400 billion in annual revenue for the first time. The company announced the milestone as part of its Q4 2025 earnings report released on Wednesday, which highlights the 15 percent year-over-year increase as its cloud business and YouTube continue to grow.
As noted in the earnings report, Google’s Cloud business reached a $70 billion run rate in 2025, while YouTube’s annual revenue soared beyond $60 billion across ads and subscriptions. Alphabet CEO Sundar Pichai told investors that YouTube remains the “number one streamer,” citing data from Nielsen. The company also now has more than 325 million paid subscribers, led by Google One and YouTube Premium.
Additionally, Pichai noted that Google Search saw more usage over the past few months “than ever before,” adding that daily AI Mode queries have doubled since launch. Google will soon take advantage of the popularity of its Gemini app and AI Mode, as it plans to build an agentic checkout feature into both tools.
Technology
Waymo under federal investigation after child struck
Federal safety regulators are once again taking a hard look at self-driving cars after a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet.
This time, the investigation centers on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours. The crash happened Jan. 23 and raised immediate questions about how autonomous vehicles behave around children, school zones and unpredictable pedestrian movement.
On Jan. 29, the National Highway Traffic Safety Administration confirmed it had opened a new preliminary investigation into Waymo’s automated driving system.
Waymo operates Level 4 self-driving vehicles in select U.S. cities, where the car controls all driving tasks without a human behind the wheel. (AP Photo/Terry Chea, File)
What happened near the Santa Monica school?
According to documents posted by NHTSA, the crash occurred within two blocks of an elementary school during normal drop-off hours. The area was busy. There were multiple children present, a crossing guard on duty and several vehicles double-parked along the street.
Investigators say the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who suffered minor injuries. No safety operator was inside the vehicle at the time.
NHTSA’s Office of Defects Investigation is now examining whether the autonomous system exercised appropriate caution given its proximity to a school zone and the presence of young pedestrians.
Federal investigators are now examining whether Waymo’s automated system exercised enough caution near a school zone during morning drop-off hours. (Waymo)
Why federal investigators stepped in
The NHTSA says the investigation will focus on how Waymo’s automated driving system is designed to behave in and around school zones, especially during peak pickup and drop-off times.
That includes whether the vehicle followed posted speed limits, how it responded to visual cues like crossing guards and parked vehicles and whether its post-crash response met federal safety expectations. The agency is also reviewing how Waymo handled the incident after it occurred.
Waymo said it voluntarily contacted regulators the same day as the crash and plans to cooperate fully with the investigation. In a statement, the company said it remains committed to improving road safety for riders and everyone sharing the road.
Waymo responds to the federal investigation
We reached out to Waymo for comment, and the company provided the following statement:
“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road. Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian. Following the event, we voluntarily contacted the National Highway Traffic Safety Administration (NHTSA) that same day. NHTSA has indicated to us that they intend to open an investigation into this incident, and we will cooperate fully with them throughout the process.
“The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.
“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.
“Following contact, the pedestrian stood up immediately, walked to the sidewalk and we called 911. The vehicle remained stopped, moved to the side of the road and stayed there until law enforcement cleared the vehicle to leave the scene.
“This event demonstrates the critical value of our safety systems. We remain committed to improving road safety where we operate as we continue on our mission to be the world’s most trusted driver.”
Understanding Waymo’s autonomy level
Waymo vehicles fall under Level 4 autonomy on NHTSA’s six-level scale.
At Level 4, the vehicle handles all driving tasks within specific service areas. A human driver is not required to intervene, and no safety operator needs to be present inside the car. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.
The NHTSA has been clear that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them.
This is not Waymo’s first federal probe
This latest investigation follows a previous NHTSA evaluation that opened in May 2024. That earlier probe examined reports of Waymo vehicles colliding with stationary objects like gates, chains and parked cars. Regulators also reviewed incidents in which the vehicles appeared to disobey traffic control devices.
That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses. Safety advocates say the new incident highlights unresolved concerns.
No safety operator was inside the vehicle at the time of the crash, raising fresh questions about how autonomous cars handle unpredictable situations involving children. (Waymo)
What this means for you
If you live in a city where self-driving cars operate, this investigation matters more than it might seem. School zones are already high-risk areas, even for attentive human drivers. Autonomous vehicles must be able to detect unpredictable behavior, anticipate sudden movement and respond instantly when children are present.
This case will likely influence how regulators set expectations for autonomous driving systems near schools, playgrounds and other areas with vulnerable pedestrians. It could also shape future rules around local oversight, data reporting and operational limits for self-driving fleets.
For parents, commuters and riders, the outcome may affect where and when autonomous vehicles are allowed to operate.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Self-driving technology promises safer roads, fewer crashes and less human error. But moments like this remind us that the hardest driving scenarios often involve human unpredictability, especially when children are involved. Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? How they answer that question could help define the next phase of autonomous vehicle regulation in the United States.
Do you feel comfortable sharing the road with self-driving cars near schools, or is that a line technology should not cross yet? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Adobe actually won’t discontinue Animate
Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company now says that Animate will be placed in maintenance mode and that it has “no plans to discontinue or remove access” to the app. Animate will still receive “ongoing security and bug fixes” and will still be available for “both new and existing users,” but it won’t get new features.
An announcement email that went out to Adobe Animate customers about the discontinuation did “not meet our standards and caused a lot of confusion and angst within the community,” according to a Reddit post from Adobe community team member Mike Chambers.
Animate will be available in maintenance mode “indefinitely” to “individual, small business, and enterprise customers,” according to Adobe. Before the change, Adobe said that non-enterprise customers could access Animate and download content until March 1st, 2027, while enterprise customers had until March 1st, 2029.