Technology
Blue Shield exposed 4.7M patients’ health data to Google
Healthcare institutions and insurers arguably collect the most sensitive information about you, including IDs, contact details, addresses and medical records. But they often don’t put in the same level of effort to protect that data.
That’s clear from the growing number of healthcare data breaches we’ve seen recently. In most of those cases, a bad actor was involved.
But in the latest news, health insurance giant Blue Shield of California confirmed that it had been sharing private health data of 4.7 million users with Google for three years without even realizing it.
A person doing a Google search (Kurt “CyberGuy” Knutsson)
What you need to know
Blue Shield of California just admitted to a major data privacy slip that went on for almost three years, from April 2021 to January 2024. It was using Google Analytics to track how people used its member websites, which is a standard practice across industries. But because the tool wasn’t configured properly, it was inadvertently sharing sensitive information with Google Ads.
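For context, Google Analytics 4 exposes configuration flags that site owners can use to keep analytics data from flowing into Google’s advertising products. Here is a minimal sketch of a gtag.js setup with those protections enabled; the measurement ID is a placeholder, and this is an illustration of the relevant settings, not Blue Shield’s actual configuration:

```javascript
// Hypothetical gtag.js snippet; "G-XXXXXXX" is a placeholder measurement ID.
// The two flags below are Google Analytics settings that disable the
// analytics-to-ads data sharing at the heart of this kind of incident.
gtag('config', 'G-XXXXXXX', {
  // Don't share session data with Google's advertising products
  allow_google_signals: false,
  // Don't use collected data for ad personalization
  allow_ad_personalization_signals: false
});
```

With defaults left on and ads integration linked, analytics events can feed ad targeting, which is the kind of leakage described here.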
What I find extremely shocking is that it took the company three years to realize it was sharing its user data with Google to run ads. This says a lot about how much these healthcare giants care about protecting your data.
The shared data included a broad array of protected health information (PHI), including names, zip codes, gender, medical claim dates, online account numbers, insurance plan names, group numbers, family data and even search criteria used in its “Find a Doctor” feature.
“Google may have used this data to conduct focused ad campaigns back to those individual members. We want to reassure our members that no bad actor was involved, and, to our knowledge, Google has not used the information for any purpose other than these ads or shared the protected information with anyone,” the company said in a notice on its website.
This incident is not isolated. Over the past few years, healthcare and tech companies have come under scrutiny for similar missteps. The Federal Trade Commission (FTC) and the Department of Health and Human Services (HHS) have already issued warnings about the use of tracking technologies in healthcare, especially those that might expose patient data to third parties without adequate transparency or safeguards.
A Google spokesperson provided the following comment to CyberGuy when asked about the Blue Shield data breach:
“Businesses, not Google, manage the data they collect and must inform users about its collection and use. By default, any data sent to Google Analytics for measurement does not identify individuals, and we have strict policies against collecting private health information (PHI) or advertising based on sensitive information.”
A person working on their laptop (Kurt “CyberGuy” Knutsson)
Impact on patients and the industry
Since the data was only shared with Google and not any other party, the overall risk is relatively low, apart from the clear privacy violation. It’s highly unlikely that anyone else will gain access to it, so the chances of the data being misused are slim. Google says it doesn’t allow ads to be served based on sensitive information like health, so there’s a good chance your data wasn’t even used for advertising.
Blue Shield’s case follows a string of similar breaches. Companies like GoodRx, BetterHelp and Kaiser have all faced regulatory and legal consequences for sharing sensitive user data with advertising vendors. Some even settled for millions of dollars. Despite the risks, many healthcare organizations have continued using these tools due to the lack of clear regulatory guardrails, a situation complicated further by a federal court ruling that blocked the Biden administration’s attempts to curb the use of online trackers in healthcare settings.
A person working on a laptop (Kurt “CyberGuy” Knutsson)
How to protect your health data online
The Blue Shield of California incident is a reminder that even well-known healthcare providers can mishandle sensitive data. While you can’t always control what happens behind the scenes, there are steps you can take to reduce your exposure and safeguard your privacy:
1. Limit what you share on health portals: Avoid entering more personal details than absolutely necessary on insurance or provider websites. Tools like “Find a Doctor” might log your search terms, so keep inputs vague when possible.
2. Use privacy-focused browsers: Browsers like Brave or Firefox offer built-in privacy protections, such as blocking third-party trackers that could expose health-related browsing activity.
3. Turn off ad personalization: Visit Google’s Ad Settings and disable ad personalization. This won’t stop tracking, but it can reduce how your data is used for targeting.
4. Opt out of tracking where possible: Many healthcare sites use cookies and tracking tools. Choose “reject all” or the strictest privacy settings in cookie banners. If a tracking opt-out tool is available, use it.
5. Read privacy policies (yes, really): Look for language like “third-party sharing,” “advertising,” or “analytics.” If a healthcare provider mentions tools like Google Analytics or Meta Pixel, that’s a cue to proceed cautiously.
6. Monitor your accounts and credit: Keep an eye out for unusual insurance claims or medical charges. Set up credit alerts or monitoring services if your provider offers them, especially after a breach.
7. Ask questions: Call or email your healthcare provider or insurer. Ask what tracking tools they use and how they protect your data. The more consumers push for transparency, the more pressure there is to improve standards.
Bonus privacy steps (For extra peace of mind)
If you want to go beyond the basics, here are some additional steps that can help reduce your digital footprint and catch misuse early:
Use a personal data removal service: While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap — and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.
Consider identity theft protection services: If you’re concerned about fraud or medical identity theft, you’ll want to consider using identity theft protection services. Identity theft companies can monitor personal information like your Social Security number, phone number and email address and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.
Use strong antivirus software: To guard against malware or phishing attacks that could compromise access to your online health accounts, be sure to use strong antivirus software. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.
Kurt’s key takeaway
It baffles me how careless many companies are when it comes to protecting user data. Blue Shield “mistakenly” shared your data with Google, which may then have used it to serve personalized ads, and it took the company three years to notice. While most cyber incidents involve an attacker, this breach didn’t need one. We need accountability in data practices, especially when human error or a technical oversight can cause damage at scale.
How comfortable are you knowing that your health data might be used to target ads? Let us know by writing us at Cyberguy.com/Contact
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Copyright 2025 CyberGuy.com. All rights reserved.
Apple is going high-end with new ‘Ultra’ products next
Fresh off launching the low-cost MacBook Neo, Apple is reportedly preparing at least three new products for its highest-end “Ultra” lineup. According to Bloomberg’s Mark Gurman, the next batch of releases may not all carry the “Ultra” name the way the Apple Watch Ultra does, but they will all command price premiums over their mainline counterparts.
There’s the oft-rumored foldable iPhone, which is expected to cost around $2,000, and a touchscreen MacBook Pro is supposedly slated for the fall. Those are pretty straightforward plays for the higher end of the market. More interesting are the next-gen AirPods, which are rumored to include cameras to feed visual context to Siri. Since AirPods already use the Pro and Max branding, similar to Apple Silicon, a set of AirPods Ultra could very well be on the docket.
Between the Neo and multiple foldables in the works, it seems that Apple is simultaneously trying to go further up- and down-market.
Meta smart glasses privacy concerns grow
Smart glasses promise a future where technology blends into everyday life. You can ask a question, snap a quick video or identify what you are looking at in seconds. It sounds convenient. However, a new investigation suggests the experience may come with a privacy tradeoff many users never expected.
According to an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, contractors reviewing AI data in Nairobi, Kenya, may have seen highly personal footage captured by Meta’s AI-powered smart glasses. In some cases, the videos reportedly showed bathroom visits, sexual activity and other intimate moments.
The allegations have already sparked legal action and renewed debate about how AI systems are trained.
CEO Mark Zuckerberg sported a pair of Meta Ray-Ban Display AI glasses while speaking at an event in Menlo Park, California, on Sept. 17, 2025. (David Paul Morris/Bloomberg via Getty Images)
Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.
Report claims Meta smart glasses captured private moments
The investigation focused on people who work as AI annotators. These workers review images, video or audio so artificial intelligence systems can better understand what they are processing. In simple terms, they help train the AI.

Workers interviewed for the report said they sometimes review video captured by Meta’s smart glasses. According to the investigation, the footage can include extremely personal scenes recorded in everyday environments. One annotator told reporters they see everything from living rooms to naked bodies. Another worker said faces are supposed to be blurred automatically in the footage. However, the blurring reportedly fails at times, leaving some identities visible. In some clips, workers also said they could see credit cards or other sensitive details.
Why human reviewers analyze Meta smart glasses data
Many people assume AI systems learn entirely on their own. In reality, human reviewers often play a major role in training them. AI annotators help label what appears in images, identify spoken words and verify whether an AI response is correct. Without that human input, the system struggles to improve. Meta’s smart glasses include an AI assistant that answers questions about what a user is seeing. For example, a wearer might ask the glasses to identify a landmark or explain what an object is. To make those answers accurate, the system sometimes relies on training data reviewed by humans.
Meta responds to smart glasses privacy concerns
Meta says media captured by its smart glasses remains on the user’s device unless the user chooses to share it.
A Meta spokesperson provided the following statement to CyberGuy:
“Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
Ray-Ban Meta glasses include an LED indicator light that activates whenever photos or videos are recorded, helping signal to people nearby that content is being captured. The company’s terms of service also state that users are responsible for following applicable laws and using the glasses in a safe and respectful manner. That includes avoiding activities such as harassment, infringing on privacy rights or recording sensitive information.
Meta has also been in contact with Sama, a company that provides AI data annotation services. According to information shared by Meta, Sama said it is not aware of workflows where sexual or objectionable content is reviewed or where faces or sensitive details remain consistently unblurred. Meta is continuing to investigate the matter.
Meta CEO Mark Zuckerberg appears at the Dirksen Senate Office Building in Washington, D.C., on Jan. 31, 2024, to testify before the Senate Judiciary Committee alongside other social media executives. (Matt McClain/The Washington Post via Getty Images)
Privacy policy changes added to the concern
The controversy arises as Meta has expanded the capabilities of its AI glasses. The glasses, created with eyewear giant EssilorLuxottica, include a camera and an AI assistant that responds to voice questions. Sales have surged: the company reportedly sold more than 7 million pairs in 2025, a dramatic increase over earlier years.

At the same time, Meta updated its privacy policies. One change keeps the AI camera features active unless users turn off the “Hey Meta” voice command. Another removes the ability to opt out of storing voice recordings in the cloud. For privacy advocates, those changes make the investigation more troubling.
What this means to you
If you use smart glasses or similar wearable technology, the report highlights an important reality: AI devices often collect more information than people realize. When you share content with AI systems, human reviewers may analyze that material to help improve the technology, which means footage captured by your device may be seen by someone else during the training process.

Wearable cameras also record everyday life, which makes it easy for private or sensitive moments to be captured unintentionally. Even when companies use tools to blur faces or hide identifying details, those systems do not always work perfectly, so personal information can sometimes still appear in the footage. Privacy policies also evolve as companies roll out new AI features. Staying aware of those updates can help you decide how comfortable you are with the technology you are using.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Mark Zuckerberg wears the Meta Ray-Ban Display glasses while speaking at the company’s headquarters in Menlo Park, California, on Sept. 17, 2025. (Reuters/Carlos Barria)
Kurt’s key takeaways
Smart glasses are quickly moving from novelty to everyday gadget. The idea of having AI help you understand the world around you is undeniably appealing. However, the same technology that makes these devices powerful also raises complicated privacy questions. Cameras that are always within reach, AI systems that learn from real-world footage and human reviewers who help train those systems create a chain of data that many users rarely think about. As smart wearables become more common, transparency about how that data is used will matter more than ever.
So here is the bigger question. Would you feel comfortable wearing AI glasses if someone halfway around the world might review the footage your device captures? Let us know by writing to us at Cyberguy.com
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Listen to this: Mabe Fratti’s experimental cello pop
The opening notes of “Kravitz”, which kicks off Mabe Fratti’s 2024 record Sentir Que No Sabes, are lodged in my brain permanently. It’s not a showy album, by any means. But there’s something about the buzzing of her cello, plucked as you might an upright bass. The way they ring out before coming to an abrupt stop, fuzz still hanging in the air, set against a simple kick and snare sat firmly in the pocket. There’s something industrial about the way it all comes together, like a jazzy “Closer.”
Then come Fratti’s paranoid lyrics in Spanish about ears in the ceiling and someone listening through the walls, and the slightly atonal horn blasts. In the back half, the arrangement blooms with big piano chords, and the drums pick up steam. It’s the perfect opening to a record that sees Fratti taking her experimental impulses and working them into something that more closely resembles pop music, straying further from her avant-garde roots.
Fratti was born in Guatemala, but operates out of Mexico. She’s told Pitchfork that, as a child, her parents mostly played Christian and classical music around the house. But as a teen, she discovered Limewire and the works of experimental composers like György Ligeti. This more expansive, internet-fed musical diet is on display in tracks like “Pantalla Azul.” It flits about, toying with various styles from goth rock to new age, but always coming back to the strength of Fratti’s melodic instincts. Meanwhile, “Oidos” leans fully into chamber pop, with echoed cello stabs, plaintive trumpet, and what sounds like an autoharp.
Even when the arrangements are stripped down, Sentir Que No Sabes sounds lush and enveloping. It would feel equally at home in a coffee shop or on an arena stage. The production from I. La Católica (Héctor Tosta) is the glue holding together Fratti’s frantic stylistic shifts and jagged cello manipulations. It would be easy for the delicate horns, atonal pizzicato strings, and icy digital synths to sound like several different albums stitched together haphazardly. Instead, the undercurrent of unease and lightly crushed drums form a thread tying all the disparate pieces together.
That’s not to say there aren’t moments of full-on experimental freakouts. Fratti indulges her more abstract musical inclinations on interludes like “Elástica” I and II, but the brilliance of Sentir Que No Sabes is in how it repackages her experimental instincts into something more approachable and downright catchy at times.
A comparison often thrown around when discussing Fratti’s music is Arthur Russell, and it makes sense. Russell was also an avant-garde cellist with surprising pop instincts. But he rarely married those two sides of his music as directly as Fratti does. For the most part, he had pop songs, and he had experimental compositions. Over her last few albums, both as a solo artist and as one half of the duo Titanic, Mabe Fratti has sought to break down those walls.