

Third-party breach exposes ChatGPT account details



ChatGPT went from novelty to necessity in less than two years. It is now part of how you work, learn, write, code and search. OpenAI has said the service has roughly 800 million weekly active users, which puts it in the same weight class as the biggest consumer platforms in the world. 

When a tool becomes that central to your daily life, you assume the people running it can keep your data safe. That trust took a hit recently after OpenAI confirmed that personal information linked to API accounts had been exposed in a breach involving one of its third-party partners.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

The breach highlights how even trusted analytics partners can expose sensitive account details. (Kurt “CyberGuy” Knutsson)


What you need to know about the ChatGPT breach

OpenAI’s notification email places the breach squarely on Mixpanel, a major analytics provider the company used on its API platform. The email stresses that OpenAI’s own systems were not breached. No chat histories, billing information, passwords or API keys were exposed. Instead, the stolen data came from Mixpanel’s environment and included names, email addresses, Organization IDs, coarse location and technical metadata from user browsers. 
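To make that list concrete, here is a rough, hypothetical sketch of the kind of event record an analytics platform like Mixpanel typically stores. The field names are illustrative assumptions, not the actual schema involved in this incident:

```python
# Hypothetical analytics event record. Field names are illustrative
# assumptions, NOT the actual schema exposed in the Mixpanel incident.
analytics_event = {
    "event": "api_dashboard_viewed",
    "distinct_id": "user-8f3a...",  # pseudonymous user identifier
    "name": "Jane Developer",       # profile data: name
    "email": "jane@example.com",    # profile data: email address
    "org_id": "org-EXAMPLE123",     # OpenAI Organization ID
    "city": "Austin",               # coarse location derived from IP
    "os": "macOS",                  # technical metadata from the
    "browser": "Chrome 131",        # user's browser and device
}
```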


That sounds harmless on the surface. The email calls this “limited” analytics data, but the label feels like PR cushioning more than anything else. For attackers, this kind of metadata is gold. A dataset that reveals who you are, where you work, what machine you use and how your account is structured gives threat actors everything they need to run targeted phishing and impersonation campaigns.

The biggest red flag is the exposure of Organization IDs. Anyone who builds on the OpenAI API knows how sensitive these identifiers are. They sit at the center of internal billing, usage limits, account hierarchy and support workflows. If an attacker quotes your Org ID during a fake billing alert or support request, it suddenly becomes very hard to dismiss the message as a scam.
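For context, developers pass an Organization ID on routine API requests, which is why quoting one reads like insider knowledge. A minimal sketch of a standard OpenAI API call using the requests library (the key and Org ID are placeholders):

```python
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code a real key
ORG_ID = "org-EXAMPLE123"               # placeholder Organization ID

# The OpenAI-Organization header routes billing and usage to this org,
# which is what makes a leaked Org ID useful to an impersonator.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "OpenAI-Organization": ORG_ID,
    },
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
)
print(resp.json())
```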

OpenAI’s own reconstructed timeline raises bigger questions. Mixpanel first detected a smishing attack on November 8. Attackers accessed internal systems the next day and exported OpenAI’s data. That data was gone for more than two weeks before Mixpanel told OpenAI on November 25. Only then did OpenAI alert everyone; the company says it cut Mixpanel off the next day. It was a long and worrying silent period, and it left API users exposed to targeted attacks without even knowing they were at risk.


The size of the risk and the policy problem behind it

The timing and the scale matter here. ChatGPT sits at the center of the generative AI boom. It does not just have consumer traffic. It has sensitive conversations from developers, employees, startups and enterprises. Even though the breach affected API accounts rather than consumer chat history, the exposure still highlights a wider issue. When a platform reaches almost a billion weekly users, any crack becomes a national-scale problem.

Regulators have been warning about this exact scenario. Vendor security is one of the weak links in modern tech policy. Data protection laws tend to focus on what a company does with the information you give them. They rarely provide strong guardrails around the entire chain of third-party services that process this data along the way. Mixpanel is not an obscure operator. It is a widely used analytics platform trusted by thousands of companies. Yet it still lost a dataset that should never have been accessible to an attacker.

Companies should treat analytics providers the same way they treat core infrastructure. If you cannot guarantee that your vendors follow the same security standards you do, you should not be collecting the data in the first place. For a platform as influential as ChatGPT, the responsibility is even higher. People do not fully understand how many invisible services sit behind a single AI query. They trust the brand they interact with, not the long list of partners behind it.

Attackers can use leaked metadata to craft convincing phishing emails that look legitimate. (Jaap Arriens/NurPhoto via Getty Images)

8 steps you can take to stay safer when using AI tools

If you rely on AI tools every day, it’s worth tightening your personal security before your data ends up floating around in someone else’s analytics dashboard. You cannot control how every vendor handles your information, but you can make it much harder for attackers to target you.


1) Use strong, unique passwords

Treat every AI account as if it holds something valuable because it does. Long, unique passwords stored in a reliable password manager reduce the fallout if one platform gets breached. This also protects you from credential stuffing, where attackers try the same password across multiple services.
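For the curious, here is a minimal sketch of what "long and unique" means in practice, using Python's standard secrets module; a good password manager does the equivalent for you automatically:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different password per service is what defeats credential stuffing:
for service in ("chatgpt", "email", "bank"):
    print(service, "->", generate_password())
```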

Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

2) Turn on phishing-resistant 2FA

AI platforms have become prime targets, so protect them with stronger 2FA. Use an authenticator app or a hardware security key. SMS codes can be intercepted or redirected, which makes them unreliable during large-scale phishing campaigns.
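The reason an authenticator app is safer than SMS is that the code never travels anywhere: your device computes it locally from a shared secret and the current time. A minimal RFC 6238 (TOTP) sketch in Python, with a placeholder secret:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                 # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code
```

Because your device and the server run this same math independently, there is no code in transit for a phisher to intercept the way SMS messages can be.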

3) Use strong antivirus software

Another important step you can take to protect yourself from phishing attacks is to install strong antivirus software on all your devices. It is the best way to safeguard yourself from malicious links that install malware and potentially access your private information, and it can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.


4) Limit what personal or sensitive data you share

Think twice before pasting private conversations, company documents, medical notes or addresses into a chat window. Many AI tools store recent history for model improvements unless you opt out, and some route data through external vendors. Anything you paste could live on longer than you expect.

5) Use a data-removal service to shrink your online footprint

Attackers often combine leaked metadata with information they pull from people-search sites and old listings. A good data-removal service scans the web for exposed personal details and submits removal requests on your behalf. Some services even let you send custom links for takedowns. Cleaning up these traces makes targeted phishing and impersonation attacks much harder to pull off.


While no service can guarantee the complete removal of your data from the internet, a data-removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


6) Treat unexpected support messages with suspicion

Attackers know users panic when they hear about API limits, billing failures or account verification issues. If you get an email claiming to be from an AI provider, do not click the link. Open the site manually or use the official app to confirm whether the alert is real.
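One habit that catches many fakes: check the actual domain the link points to, not the display text. A naive sketch (the allowlist is an example, and real lookalike domains can be subtler than this check catches):

```python
from urllib.parse import urlparse

OFFICIAL = {"openai.com", "platform.openai.com"}  # example allowlist

def looks_official(url: str) -> bool:
    """True only if the link's host is an allowlisted domain or subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL)

print(looks_official("https://platform.openai.com/settings"))      # True
print(looks_official("https://openai.com.billing-alert.example"))  # False
```

Even then, typing the address yourself or using the official app remains the safer move.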

Events like this show why strengthening your personal security habits matters more than ever. (Kurt “CyberGuy” Knutsson)


7) Keep your devices and software updated

A lot of attacks succeed because devices run outdated operating systems or browsers. Regular updates close vulnerabilities that could be used to steal session tokens, capture keystrokes or hijack login flows. Updates are boring, but they prevent a surprising amount of trouble.

8) Delete accounts you no longer need

Old accounts sit around with old passwords and old data, and they become easy targets. If you’re not actively using a particular AI tool anymore, delete the account and remove any saved information. It reduces your exposure and limits how many databases contain your details.

Kurt’s key takeaway

This breach may not have touched chat logs or payment details, but it shows how fragile the wider AI ecosystem can be. Your data is only as safe as the least secure partner in the chain. With ChatGPT now approaching a billion weekly users, that chain needs tighter rules, better oversight and fewer blind spots. If anything, this should be a reminder that the rush toward AI adoption needs stronger policy guardrails. Companies cannot simply send a transparent email after the fact; they need to prove that the tools you rely on every day are secure at every layer, including the ones you never see.

Do you trust AI platforms with your personal information? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com.  All rights reserved.



Amazon.com says things are fixed after some issues with logging in and checking out


If you were having issues shopping on Amazon or loading your playlists on Amazon Music on Thursday, you weren’t alone. For over three hours, Downdetector showed a sizable spike in people reporting issues with checkout, search, and logging in. The problem seemed to affect both the site and the mobile apps. But an Amazon spokesperson tells The Verge that the issues are now fixed.

“We’re sorry that some customers may have temporarily experienced issues while shopping,” Amazon spokesperson Jennie Bryant says in a statement. “We have resolved the issue, which was related to a software code deployment, and website and app are now running smoothly.”

Several Verge staffers experienced the issues firsthand. Clicking through to many products produced a “sorry, something went wrong” error, and even pages that did load were not showing pricing. Users reported being repeatedly logged out of their accounts when trying to check out or load their cart. Even the parts of Amazon.com that were working seemed to load slowly.

The company has been dealing with AWS outages in Bahrain and the United Arab Emirates due to drone strikes by the Iranian military, but there has not been any word of more widespread outages in the US or elsewhere.

Update March 5th: Added comment from Amazon saying that things are fixed.



$163K in fake medical bill charges; AI uncovers it for you



Last summer, a man’s brother-in-law suffered a fatal heart attack. The hospital bill for four hours of emergency care: $195,628.

The man’s sister-in-law was ready to pay it. He asked her to wait. He requested an itemized bill with CPT codes, the universal billing codes hospitals use, and fed the whole thing into Claude, an AI chatbot.

Within minutes, Claude found duplicate charges, services billed as “inpatient” even though the patient was never admitted, supply costs inflated by 500% to 2,300% above Medicare rates, and charges for procedures that never happened. He cross-checked with ChatGPT. Both AIs agreed. He wrote a six-page letter citing every violation by name.

The hospital dropped the bill to $33,000. An 83% reduction. Zero medical training. A $20 app.


A man cross-checked a hospital bill with AI and got it reduced by some 83%. (Neil Godwin/Getty Images)

Your bill is probably wrong, too

That story sounds extreme. It’s not.

The Medical Billing Advocates of America estimates 3 out of 4 medical bills contain errors. The average hospital bill over $10,000 has roughly $1,300 in mistakes. And less than 1% of denied insurance claims are ever appealed. Hospitals and insurers are banking on the fact that you won’t check.

AI flips that equation. You don’t need to understand CPT codes or have a medical billing degree. You just need to paste.

You can use AI platforms, like ChatGPT, to spot errors or suspicious charges on medical bills. (Jaap Arriens/NurPhoto via Getty Images)


The 5-minute audit

Step 1: Call your provider and request an itemized bill with CPT codes. Not the summary. The full line-by-line breakdown. You’re legally entitled to this.

Step 2: Open ChatGPT, Claude, Grok or Gemini (free versions work) and paste this:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

Step 3: Paste your bill. The AI will translate every line and tell you what looks wrong. (Prefer to script this step? See the sketch after step 4.)



If the AI finds errors, call the billing department and ask for a supervisor. (iStock)

Step 4: If the AI finds errors (it probably will), call the billing department and ask for a supervisor. Reference the specific codes. Hospitals resolve disputes all the time when patients show up prepared.
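If you’d rather script steps 2 and 3 than paste into a chat window, here is a minimal sketch using the OpenAI Python SDK. The model and file names are assumptions; the free chat interfaces work just as well:

```python
# Minimal sketch: send the audit prompt plus your itemized bill to a chat
# model. Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

PROMPT = (
    "I'm pasting my itemized medical bill below. Please: "
    "(1) Explain every charge in plain English, "
    "(2) Flag any duplicate or suspicious charges, "
    "(3) Compare each charge to average costs, "
    "(4) Identify billing code errors or bundling violations, and "
    "(5) Draft a dispute letter I can send to the billing department. "
    "Here's my bill:\n\n"
)

with open("itemized_bill.txt") as f:  # hypothetical file with your bill text
    bill = f.read()

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": PROMPT + bill}],
)
print(reply.choices[0].message.content)
```

Consider stripping names, account numbers and other identifiers from the bill first; the CPT codes and dollar amounts are all the audit needs.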

Pro tip: Counterforce Health (counterforcehealth.org) is a free AI tool built specifically for insurance denial appeals. Worth bookmarking.

It’s time to give your medical bills a thorough examination. The AI will see you now.

Real talk. Everybody’s talking about AI. Nobody’s showing you what to actually DO with it. My new free newsletter, Splash of AI (SplashofAI.com), gives you one trick, one tool and one “wait, I can do THAT?” moment every single week. Five minutes. Plain English. The kind of stuff that saves you time, money or both. You’ll wonder how you got by without it.


Send this to someone who is staring at a medical bill they can’t make sense of. Forward this right now. Seriously. This could save them hundreds or even thousands of dollars, and it takes less time than making coffee.


Get tech-smarter. Starting today.

Kim Komando cuts through the tech noise so you don’t have to. Real advice. Zero jargon. Every single day.

Catch the national radio show on 500-plus stations, get the free daily newsletter, watch on YouTube or listen to the podcast wherever you get your shows. It’s all waiting at Komando.com.

Copyright 2026, WestStar Multimedia Entertainment. All rights reserved.



Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya


Meta’s AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show “bathroom visits, sex and other intimate moments.”

So far, at least one proposed class action lawsuit accusing Meta of violating false advertising and privacy laws has emerged in response to Svenska Dagbladet’s reporting, citing the company’s claim that its smart glasses are designed for privacy:

By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer’s decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person’s life.

The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text, or audio with the goal of helping AI systems make sense of the data they’re training on. “We see everything — from living rooms to naked bodies,” one worker says, according to Svenska Dagbladet. “Meta has that type of content in its databases.”

A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this “does not always work as intended,” and some faces are still visible. Another person reportedly tells the outlet that a wearer’s bank cards are sometimes seen in the footage they review as well.

Meta’s Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance.


EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025 — more than tripling its sales in 2023 and 2024 combined. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses “unless you turn off ‘Hey Meta.’” It also stopped allowing wearers to opt out of storing their voice recordings in the cloud.

As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses “stays on the user’s device” unless they choose to share it with other people or Meta.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Clayton says. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
