Third-party breach exposes ChatGPT account details
ChatGPT went from novelty to necessity in less than two years. It is now part of how you work, learn, write, code and search. OpenAI has said the service has roughly 800 million weekly active users, which puts it in the same weight class as the biggest consumer platforms in the world.
When a tool becomes that central to your daily life, you assume the people running it can keep your data safe. That trust took a hit recently after OpenAI confirmed that personal information linked to API accounts had been exposed in a breach involving one of its third-party partners.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
The breach highlights how even trusted analytics partners can expose sensitive account details. (Kurt “CyberGuy” Knutsson)
What you need to know about the ChatGPT breach
OpenAI’s notification email places the breach squarely on Mixpanel, a major analytics provider the company used on its API platform. The email stresses that OpenAI’s own systems were not breached. No chat histories, billing information, passwords or API keys were exposed. Instead, the stolen data came from Mixpanel’s environment and included names, email addresses, Organization IDs, coarse location and technical metadata from user browsers.
That sounds harmless on the surface. The email calls this “limited” analytics data, but the label feels like PR cushioning more than anything else. For attackers, this kind of metadata is gold. A dataset that reveals who you are, where you work, what machine you use and how your account is structured gives threat actors everything they need to run targeted phishing and impersonation campaigns.
The biggest red flag is the exposure of Organization IDs. Anyone who builds on the OpenAI API knows how sensitive these identifiers are. They sit at the center of internal billing, usage limits, account hierarchy and support workflows. If an attacker quotes your Org ID during a fake billing alert or support request, it suddenly becomes very hard to dismiss the message as a scam.
OpenAI’s own reconstructed timeline raises bigger questions. Mixpanel first detected a smishing attack on November 8. Attackers accessed internal systems the next day and exported OpenAI’s data. That data was gone for more than two weeks before Mixpanel told OpenAI on November 25. Only then did OpenAI alert everyone. It is a long and worrying silent period, and it left API users exposed to targeted attacks without even knowing they were at risk. OpenAI says it cut Mixpanel off the next day.
The size of the risk and the policy problem behind it
The timing and the scale matter here. ChatGPT sits at the center of the generative AI boom. It does not just have consumer traffic. It has sensitive conversations from developers, employees, startups and enterprises. Even though the breach affected API accounts rather than consumer chat history, the exposure still highlights a wider issue. When a platform reaches almost a billion weekly users, any crack becomes a national-scale problem.
Regulators have been warning about this exact scenario. Vendor security is one of the weak links in modern tech policy. Data protection laws tend to focus on what a company does with the information you give them. They rarely provide strong guardrails around the entire chain of third-party services that process this data along the way. Mixpanel is not an obscure operator. It is a widely used analytics platform trusted by thousands of companies. Yet it still lost a dataset that should never have been accessible to an attacker.
Companies should treat analytics providers the same way they treat core infrastructure. If you cannot guarantee that your vendors follow the same security standards you do, you should not be collecting the data in the first place. For a platform as influential as ChatGPT, the responsibility is even higher. People do not fully understand how many invisible services sit behind a single AI query. They trust the brand they interact with, not the long list of partners behind it.
Attackers can use leaked metadata to craft convincing phishing emails that look legitimate. (Jaap Arriens/NurPhoto via Getty Images)
8 steps you can take to stay safer when using AI tools
If you rely on AI tools every day, it’s worth tightening your personal security before your data ends up floating around in someone else’s analytics dashboard. You cannot control how every vendor handles your information, but you can make it much harder for attackers to target you.
1) Use strong, unique passwords
Treat every AI account as if it holds something valuable because it does. Long, unique passwords stored in a reliable password manager reduce the fallout if one platform gets breached. This also protects you from credential stuffing, where attackers try the same password across multiple services.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
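For the technically curious, the breach-scanner idea above rests on a simple privacy trick called k-anonymity, popularized by Have I Been Pwned's Pwned Passwords API: only the first five characters of your password's SHA-1 hash ever leave your machine. The Python sketch below illustrates the technique; the function names are my own, not part of any official tool.

```python
import hashlib

def hash_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hex digest into the 5-char prefix
    (the only part sent to the range API) and the 35-char suffix
    (which stays on your machine)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """Compare the local suffix against a 'SUFFIX:COUNT' response body
    returned by the range endpoint for our 5-char prefix."""
    _, suffix = hash_prefix_suffix(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

A real lookup would fetch `https://api.pwnedpasswords.com/range/<prefix>` and hand the body to `is_breached`. The design point: neither your password nor even its full hash is ever transmitted, so the service cannot learn what you checked.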
2) Turn on phishing-resistant 2FA
AI platforms have become prime targets, so protect them with stronger 2FA. Use an authenticator app or a hardware security key. SMS codes can be intercepted or redirected, which makes them unreliable during large-scale phishing campaigns.
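Authenticator apps aren't magic: they implement the open TOTP standard (RFC 6238), which derives a short-lived code from a shared secret and the current time, so there is nothing for an attacker to intercept in transit. Here is a compact Python sketch of the algorithm; the `totp` function name is mine, and in practice you should rely on your authenticator app or a maintained library such as pyotp rather than rolling your own.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a moving 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current 30-second window, a phished code expires almost immediately, which is exactly why these beat static SMS codes.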
3) Use strong antivirus software
Another important step you can take to protect yourself from phishing attacks is to install strong antivirus software on all your devices. It is the best way to safeguard yourself against malicious links that install malware and potentially access your private information, and it can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
4) Limit what personal or sensitive data you share
Think twice before pasting private conversations, company documents, medical notes or addresses into a chat window. Many AI tools store recent history for model improvements unless you opt out, and some route data through external vendors. Anything you paste could live on longer than you expect.
5) Use a data-removal service to shrink your online footprint
Attackers often combine leaked metadata with information they pull from people-search sites and old listings. A good data-removal service scans the web for exposed personal details and submits removal requests on your behalf. Some services even let you send custom links for takedowns. Cleaning up these traces makes targeted phishing and impersonation attacks much harder to pull off.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Treat unexpected support messages with suspicion
Attackers know users panic when they hear about API limits, billing failures or account verification issues. If you get an email claiming to be from an AI provider, do not click the link. Open the site manually or use the official app to confirm whether the alert is real.
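One practical way to apply this advice is to look at the hostname a link actually points to, not the text displayed on top of it. The short Python sketch below shows the idea; the trusted-domain list is purely illustrative, and a hostname check is only one small layer of phishing defense.

```python
from urllib.parse import urlparse

# Domains you actually log in to (illustrative examples only)
TRUSTED = {"openai.com", "chatgpt.com"}

def looks_trusted(url: str) -> bool:
    """True only if the link's hostname is a trusted domain
    or a genuine subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED)
```

Note how a lookalike such as `openai.com.evil-example.net` fails the check: attackers often bury a trusted brand name at the front of a hostname they control, which fools the eye but not a suffix comparison.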
Events like this show why strengthening your personal security habits matters more than ever. (Kurt “CyberGuy” Knutsson)
7) Keep your devices and software updated
A lot of attacks succeed because devices run outdated operating systems or browsers. Regular updates close vulnerabilities that could be used to steal session tokens, capture keystrokes or hijack login flows. Updates are boring, but they prevent a surprising amount of trouble.
8) Delete accounts you no longer need
Old accounts sit around with old passwords and old data, and they become easy targets. If you’re not actively using a particular AI tool anymore, delete it from your account list and remove any saved information. It reduces your exposure and limits how many databases contain your details.
Kurt’s key takeaway
This breach may not have touched chat logs or payment details, but it shows how fragile the wider AI ecosystem can be. Your data is only as safe as the least secure partner in the chain. With ChatGPT now approaching a billion weekly users, that chain needs tighter rules, better oversight and fewer blind spots. If anything, this should be a reminder that the rush toward AI adoption needs stronger policy guardrails. Companies cannot simply point to after-the-fact disclosure emails. They need to prove that the tools you rely on every day are secure at every layer, including the ones you never see.
Do you trust AI platforms with your personal information? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Buy two Nintendo Switch games, get $30 off at Target
Target is offering a great deal to some Target Circle members that knocks $30 off the cost of two Nintendo Switch and Switch 2 games. The sale is happening for the rest of the day, expiring at 2:59AM ET on April 5th. If you sign in with the free-to-join membership, you might be able to add two eligible games to your cart, then watch the prices fall at checkout.
There are 224 eligible games (some physical, some digital), and many of Nintendo’s biggest hits from the past year and beyond are here, including Switch 2-exclusive games like Donkey Kong Bananza, Kirby Air Riders, Mario Kart World, Mario Tennis Fever, and more (I didn’t see Pokémon Pokopia in the list, though).
This deal is worth hopping on whether you intend to gift these games, or just get them for yourself. Discounts on Nintendo-published games are rare, and it’s quite a nice perk that Target Circle members have in getting to choose the games they want to save on.
While each of the games I mentioned ships on a cartridge that doesn’t require much of your console’s internal storage (just enough for save data), some Switch 2 games ship on Game Key Cards. Those cartridges, once inserted into the console, simply grant you the ability to download a copy from the Nintendo eShop onto your console. Game sizes vary, but you may want to pick up a microSD Express card to add more storage on top of the Switch 2’s 256GB of built-in storage. This 256GB Samsung model is $59 at Amazon.
How to opt out of AI data collection in popular apps
Every time you ask ChatGPT a question, say “Hey Siri” or let Google finish your sentence, something else may happen in the background. In many cases, you are helping train the AI that responds to you.
Most people do not realize this. However, many AI platforms use conversations to improve their systems. As a result, your questions, your voice and your habits can be stored and reused by some of the world’s largest tech companies.
That said, you are not stuck with these settings. You can turn off much of this data collection if you know where to look. Even better, it only takes about 15 minutes across the major platforms. Here is exactly how.
What AI apps are quietly collecting about you
AI assistants are designed to feel like a private conversation. But, depending on the platform, what’s collected often goes well beyond what you typed or said:
- Full conversation transcripts
- Voice recordings and audio clips
- Location data and device identifiers
- Browsing habits and search history
- Names, routines and personal details you mention in passing
- App usage patterns across your devices
Almost none of this is turned off by default. You have to go find the switch yourself.
Think about what you’ve actually shared lately
Here’s a quick thought experiment. In the last month, have you asked an AI assistant about:
- A health symptom you were worried about?
- A financial decision you were weighing?
- A family situation you needed advice on?
- Your child’s schedule, school or activities?
Each detail seems harmless on its own. But, together, they create a surprisingly detailed picture of your life, one that could be stored indefinitely, reviewed by human contractors or exposed in a data breach.
In 2023, Samsung engineers accidentally leaked sensitive internal code by pasting it into ChatGPT. Most people don’t have an IT department watching out for them. But everyone can take a few minutes to adjust their settings.
How to opt out platform by platform
This doesn’t mean you should stop using AI tools. They can be incredibly useful. But it’s worth understanding what’s being collected and what you can turn off right now.
1) ChatGPT (OpenAI)
By default, your conversations may be used to help improve AI models, but you can turn this off at any time.
To turn this off:
- Open ChatGPT
- Tap or click your profile icon
- Select Settings
- Go to Data Controls
- Toggle off “Improve the model for everyone”
You can also go to Settings > Data Controls > Export data to download everything OpenAI has stored, or select Delete all chats to wipe your history. Note that even with training off, OpenAI retains conversations for up to 30 days for safety monitoring.
Turning off “Improve the model for everyone” stops your ChatGPT conversations from being used for training. (Kurt “CyberGuy” Knutsson)
2) Google (Gemini & AI features)
Google’s AI tools, including Gemini and Search’s AI Overviews, are tied to your Google account activity.
To manage this:
- Go to myactivity.google.com
- Select Web & App Activity and turn it off, or set auto-delete to three months
- Separately, visit gemini.google.com > Settings > Gemini Apps Activity and toggle it off
Keep in mind that disabling activity tracking may affect personalization across Gmail, Maps and other Google services.
Google’s Gemini activity settings show how your AI interactions may still be stored unless you delete them. (Kurt “CyberGuy” Knutsson)
3) Microsoft Copilot
Copilot is built into Windows, Microsoft 365 and Edge, so it can access a wide range of your documents and activity.
To adjust your settings:
- Go to account.microsoft.com/privacy and sign in
- Click Privacy in the left-hand menu
- Scroll to App and service activity and review your recent activity
- Click Clear all activities or remove individual items
- Scroll down to App and service performance data, and clear that data if available
- Scroll further and select Copilot, then tap Manage data from Microsoft Copilot to review or delete your data
In Windows 11: Settings > Privacy & Security > Diagnostics & Feedback and turn off Optional diagnostic data
Microsoft does not offer one single switch that turns off all Copilot data collection, so you need to review settings in multiple places. Enterprise users should check with an IT administrator since organizational settings may also apply.
Microsoft’s privacy dashboard lets you review and clear app and service activity tied to your account. (Kurt “CyberGuy” Knutsson)
4) Amazon Alexa
Alexa stores voice recordings by default, and, in some cases, Amazon may have human reviewers listen to those recordings as part of its quality review process.
To turn off voice recording use:
- Open the Alexa app
- Tap More (upper left, three lines)
- Tap Alexa Privacy
- Scroll down and select Manage Your Alexa Data
- Tap Help Improve Alexa and turn off Use Voice Recordings
- Confirm your decision by tapping Turn off
To stop Alexa from keeping your recordings:
- Open the Alexa app
- Tap More (upper left, three lines)
- Tap Alexa Privacy
- Scroll down and select Manage Your Alexa Data
- Tap Voice Recordings and Transcripts
- Select Don’t retain
In the Alexa app, turning off voice recording use prevents Amazon from using your recordings to improve services. (Kurt “CyberGuy” Knutsson)
5) Apple Siri
Apple is generally more privacy-focused than other platforms, but Siri still collects data to improve its performance.
To limit Siri data collection:
- Go to Settings
- Tap Privacy & Security
- Tap Analytics & Improvements
- Turn off Share iPhone & Apple Watch Analytics
- Scroll down and turn off Improve Siri & Dictation
To delete your existing Siri history:
- Go to Settings
- Tap Siri (or Apple Intelligence & Siri)
- Tap Siri & Dictation History
- Tap Delete Siri & Dictation History
Disabling analytics on iPhone limits how Apple collects data to improve Siri and other features. (Kurt “CyberGuy” Knutsson)
Why AI privacy settings are only part of the solution
Adjusting these settings is an important step. But it only controls what these apps collect directly going forward. It doesn’t address the hundreds of websites that may already be publishing your personal information online, right now, without your knowledge.
Data brokers are still collecting your information
Data brokers do not need your AI chat history. Instead, they pull information from public records, marketing lists and people search databases. They also refresh these profiles constantly, which keeps your data active and easy to find.
As a result, your name, address, phone number and family members may already appear on dozens of sites you have never heard of. Unlike AI apps, these sites do not offer a single settings menu to turn this off.
While you can remove your data manually, the process takes hours and often requires repeated requests when your information gets reposted. In many cases, you need to revisit these sites regularly to keep your information from reappearing.
The goal is simple: make it much harder for strangers, scammers and cybercriminals to find your personal information online.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com
Kurt’s Key Takeaways
Spending just 15 minutes adjusting your AI privacy settings is one of the most effective steps you can take to protect your digital privacy right now. Most major platforms, including OpenAI, Google, Microsoft, Amazon and Apple, collect data by default. However, you can opt out, even though companies often bury these settings deep in menus.
As a result, many people never find them. At the same time, AI assistants feel private and conversational, so you may share more personal information than you realize. Even if you turn off data collection going forward, companies do not erase what they have already stored. In addition, these settings only control what happens inside each platform. Data brokers still build separate profiles about you using information pulled from across the internet.
Because of this, privacy is not a one-time fix. Instead, you need to check your settings regularly and stay aware of what you share. The good news is you do not have to stop using AI tools. Instead, take a few minutes this week to review your settings and make sure the rest of your digital footprint is not working against you.
How much personal data are you willing to let big tech companies collect from your everyday AI use? Let us know by writing to us at Cyberguy.com.
A folk musician became a target for AI fakes and a copyright troll
In January, folk artist Murphy Campbell discovered several songs on her Spotify profile that did not belong there. They were songs she had recorded, but she’d never uploaded them to Spotify, and something was off about the vocals. She quickly surmised that someone had pulled performances of the songs she posted to YouTube, created AI covers, and uploaded them to streaming platforms under her name. I ran one of the songs, “Four Marys,” through two different AI detectors, and the results seemed to support her suspicions: both said it was probably AI-generated.
Campbell was shocked. “I was kind of under the impression that we had a little bit more checks in place before someone could just do that. But, you know, a lesson learned there,” she told The Verge. It took some time before Campbell managed to get the fake songs removed. “I became a pest,” she said. And even then, it wasn’t a complete victory. While the offending tracks no longer appear to be available on YouTube Music or Apple Music, at least one can still be found on Spotify, just under a different artist profile with the same name. There are now multiple Murphy Campbells. “Obviously, I was thrilled by that,” the real Murphy Campbell said.
Spotify is testing a new system that would allow artists to manually approve songs before they appear on their profile, but Campbell is skeptical after being burned. “I feel like, every time, an entity that’s that large makes a promise like that to musicians. It seems to just not be what they made it out to be, but I’ll be curious to try it out in the future,” she said.
This was just the beginning of Campbell’s nightmare, however.
On the day that a Rolling Stone article was published, discussing Campbell’s brush with AI imitators, a series of videos were uploaded to YouTube through distributor Vydia. Those videos have not been posted publicly, and it’s unclear if anyone other than the uploader, who goes by Murphy Rider, has seen them. YouTube declined to comment for this story.
Those videos were used to claim ownership of the material in several of Murphy Campbell’s videos. Campbell received a notice from YouTube reading: “You are now sharing revenues with the copyright owners of the music detected in your video, Darling Corey.” The most confusing part: the songs at the center of these claims are all in the public domain, including the classic “In the Pines,” which dates back to at least the 1870s and has been covered by everyone from Lead Belly to Nirvana (as “Where Did You Sleep Last Night”).
Vydia has since released those claims, and spokesperson Roy LaManna says the person who uploaded the videos has been banned from the platform. Of the more than 6 million claims filed by Vydia through YouTube’s Content ID system, 0.02 percent were found to be invalid, a rate LaManna says “by industry standards is like amazing.” He adds, “we pride ourselves on doing this the right way.”
LaManna also says that Vydia has no connection to Timeless IR or the AI covers that were uploaded to streaming platforms under Campbell’s name. While the timing is certainly suspicious, LaManna says the two incidents are separate.
Vydia has received a lot of blowback, including, LaManna says, “literal death threats,” which have led to its offices being evacuated. Campbell isn’t about to let Vydia off the hook, but notes that it’s not solely to blame. The worlds of generative AI, music distribution, and copyright are complex, with multiple points of failure and opportunities for abuse. “I think it goes way deeper than we think it does,” Campbell says.