
The sickening truth: Healthcare data breaches reach all-time high


If your healthcare data hasn’t been breached in 2024, then you either don’t know it yet or should consider yourself very lucky. 

That’s because 2024 was a nightmare year for healthcare institutions and patients in the U.S. A total of 184,111,469 records were breached. That’s 53% of the 2024 population of the United States. 

This staggering figure represents a significant increase from previous years, setting a new and alarming record in healthcare data breaches. 

The healthcare sector faced unprecedented challenges in cybersecurity, with attacks becoming more frequent, sophisticated and damaging than ever before.



Illustration of a hacker at work stealing healthcare data (Kurt “CyberGuy” Knutsson)

Health check or data leak?

Being admitted to a hospital is stressful enough. For the 100 million customers of Change Healthcare, that stress was compounded when their data was exposed in a breach orchestrated by the BlackCat ransomware group. Not only did the breach expose sensitive health information, but it also caused widespread disruptions in claims processing. Patients and providers across the country faced chaos as the breach hampered their ability to access and pay for healthcare services.

The second significant breach occurred at Kaiser Foundation Health Plan, where the personal data of 13.4 million individuals was compromised. This breach involved unauthorized access and the use of tracking technologies that transmitted user interactions to third parties. 

Illustration of healthcare data needing to be locked up (Kurt “CyberGuy” Knutsson)



Your health data gets breached, so what?

You’ll receive a notification letter, although be aware that it may take months before it reaches you (as was the case for victims of the Ascension Health data breach). The consequences are real and can be very painful. Medical identity theft directly affects patients’ health and safety. It happens when criminals use stolen personal health information to obtain medical services or medications under another person’s name. It can result in incorrect medical records being created that can include inaccurate diagnoses, allergies or treatments. 

And as you may have guessed, it can also result in financial repercussions, such as patients being billed for fraudulent claims and services they never received. Resolving these issues with insurers and healthcare providers takes time and mental strength. And you’re probably not in a hurry to see your breached healthcare provider ever again. That’s normal. A study has shown that up to 54% of patients consider switching providers after a data breach.


A doctor looking at healthcare data on a screen (Kurt “CyberGuy” Knutsson)



When health data gets into the hands of data brokers

Sensitive health information can easily be combined with personal identifiers from data brokers, creating comprehensive profiles that criminals can exploit. As a reminder, data brokers are companies that specialize in collecting, processing and selling personal information from various sources, including public records, online activities and social media. 

They aggregate this data to create detailed consumer profiles that can be sold to marketers, insurance companies and other entities for various purposes. The more detailed the profile, the higher the chance of identity theft and potential discrimination in employment and insurance. Employers might make hiring decisions based on perceived health risks, while insurers could deny coverage or increase premiums.

A doctor and patient in a healthcare facility (Kurt “CyberGuy” Knutsson)



Wash your hands, remove your data

You can’t prevent a data breach, but you can minimize its consequences by reducing your digital footprint overall.

1. Set your social media to private: Restrict access to your personal information and limit what strangers can see about your life and potentially your health status. Ensure your privacy settings are robust and regularly updated to prevent unauthorized data collection.

2. Remove your personal data from data brokers’ databases: You can do this either by searching for your name on people search sites and requesting removals one by one, or by using a data removal service. Data removal services automate the process for you and let you track exactly where your data has been found and whether it was removed, not only on people search sites, which are public data brokers, but also in hidden, private databases where you can’t look yourself up (and those are the worst).

Once your data is removed, data removal services monitor data brokers for your data and remove it again as needed (because it has a tendency to be re-listed after a while). This way, you prevent data broker companies from compiling a full profile on you and selling it to the first bidder, whether that’s a hacker, a marketing agency or an insurance company. Check out my top picks for data removal services here.

3. Delete all unused apps on your phone: Unused applications can be hidden gateways for data leakage and potential security vulnerabilities. Regularly audit and remove apps that you no longer use or need.


4. Check the permissions of the ones you want to keep: Review each app’s access to your personal data, location and device features to ensure you’re not inadvertently sharing more information than necessary. Be particularly cautious with health and fitness tracking applications.

5. Use a VPN (virtual private network) when browsing: Encrypt your online activities and mask your digital location to add an extra layer of anonymity and protection. A reliable VPN can help shield your personal information from potential interceptors and data miners. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices.

Kurt’s key takeaways

The reality of healthcare data breaches is daunting, but it’s not entirely out of your control. While you can’t prevent breaches from happening, you can take steps to minimize the risks and protect your personal information. Think of it as adding locks to your digital doors: set your social media to private, use a VPN and clean up unused apps. Remember, the less information you leave out there, the harder it is for bad actors to exploit it. Stay vigilant and don’t let your data become someone else’s advantage.

How do you feel about the growing risks to your personal information, and what steps have you taken to protect your data? Let us know by writing us at Cyberguy.com/Contact.


For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question or let us know what stories you’d like us to cover.



Copyright 2025 CyberGuy.com. All rights reserved.


Apple is going high-end with new ‘Ultra’ products next


Fresh off launching the low-cost MacBook Neo, Apple is reportedly preparing at least three new products for its highest-end “Ultra” lineup. According to Bloomberg’s Mark Gurman, the next batch of releases may not carry the “Ultra” name the way the Apple Watch Ultra does, but all will command price premiums over their mainline counterparts.

There’s the oft-rumored foldable iPhone, which is expected to cost around $2,000, and a touchscreen MacBook Pro is supposedly slated for the fall. Those are pretty straightforward plays for the higher end of the market. More interesting are the next-gen AirPods, which are rumored to include cameras to feed visual context to Siri. Since AirPods already use the Pro and Max branding, similar to Apple Silicon, a set of AirPods Ultra could very well be on the docket.

Between the Neo and multiple foldables in the works, it seems that Apple is simultaneously trying to go further up- and down-market.



Meta smart glasses privacy concerns grow



Smart glasses promise a future where technology blends into everyday life. You can ask a question, snap a quick video or identify what you are looking at in seconds. It sounds convenient. However, a new investigation suggests the experience may come with a privacy tradeoff many users never expected.

According to an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, contractors reviewing AI data in Nairobi, Kenya, may have seen highly personal footage captured by Meta’s AI-powered smart glasses. In some cases, the videos reportedly showed bathroom visits, sexual activity and other intimate moments.

The allegations have already sparked legal action and renewed debate about how AI systems are trained.

CEO Mark Zuckerberg sported a pair of Meta Ray-Ban Display AI glasses while speaking at an event in Menlo Park, California, on Sept. 17, 2025. (David Paul Morris/Bloomberg via Getty Images)



Report claims Meta smart glasses captured private moments

The investigation focused on people who work as AI annotators. These workers review images, video or audio so artificial intelligence systems can better understand what they are processing. In simple terms, they help train the AI.

Workers interviewed for the report said they sometimes review video captured by Meta’s smart glasses. According to the investigation, the footage can include extremely personal scenes recorded in everyday environments. One annotator told reporters they see everything from living rooms to naked bodies.

Another worker said faces are supposed to be blurred automatically in the footage. However, the blurring reportedly fails at times, leaving some identities visible. In some clips, workers also said they could see credit cards or other sensitive details.

Why human reviewers analyze Meta smart glasses data

Many people assume AI systems learn entirely on their own. In reality, human reviewers often play a major role in training them. AI annotators help label what appears in images, identify spoken words and verify whether an AI response is correct. Without that human input, the system struggles to improve. Meta’s smart glasses include an AI assistant that answers questions about what a user is seeing. For example, a wearer might ask the glasses to identify a landmark or explain what an object is. To make those answers accurate, the system sometimes relies on training data reviewed by humans.

Meta responds to smart glasses privacy concerns

Meta says media captured by its smart glasses remains on the user’s device unless the user chooses to share it.

A Meta spokesperson provided the following statement to CyberGuy:


“Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

Ray-Ban Meta glasses include an LED indicator light that activates whenever photos or videos are recorded, helping signal to people nearby that content is being captured. The company’s terms of service also state that users are responsible for following applicable laws and using the glasses in a safe and respectful manner. That includes avoiding activities such as harassment, infringing on privacy rights or recording sensitive information.

Meta has also been in contact with Sama, a company that provides AI data annotation services. According to information shared by Meta, Sama said it is not aware of workflows where sexual or objectionable content is reviewed or where faces or sensitive details remain consistently unblurred. Meta is continuing to investigate the matter.

Meta CEO Mark Zuckerberg appears at the Dirksen Senate Office Building in Washington, D.C., on Jan. 31, 2024, to testify before the Senate Judiciary Committee alongside other social media executives. (Matt McClain/The Washington Post via Getty Images)

Privacy policy changes added to the concern

The controversy arises as Meta has expanded the capabilities of its AI glasses. The glasses, created with eyewear giant EssilorLuxottica, include a camera and an AI assistant that responds to voice questions. Sales have surged. The company reportedly sold more than 7 million pairs in 2025, a dramatic increase compared with earlier years. At the same time, Meta updated its privacy policies. One change keeps the AI camera features active unless users turn off the Hey Meta voice command. Another removes the ability to opt out of storing voice recordings in the cloud. For privacy advocates, those changes make the investigation more troubling.



What this means to you

If you use smart glasses or similar wearable technology, the report highlights an important reality. AI devices often collect more information than people realize. When people share content with AI systems, human reviewers may analyze that material to help improve the technology. That means footage captured by your device may be seen by someone else during the training process.

Wearable cameras also record everyday life, which makes it easy for private or sensitive moments to be captured unintentionally. Even when companies use tools to blur faces or hide identifying details, those systems do not always work perfectly. As a result, personal information can sometimes still appear in the footage.

Privacy policies also evolve as companies roll out new AI features. Staying aware of those updates can help you decide how comfortable you are with the technology you are using.


Mark Zuckerberg wears the Meta Ray-Ban Display glasses while speaking at the company’s headquarters in Menlo Park, California, on Sept. 17, 2025. (Reuters/Carlos Barria)

Kurt’s key takeaways

Smart glasses are quickly moving from novelty to everyday gadget. The idea of having AI help you understand the world around you is undeniably appealing. However, the same technology that makes these devices powerful also raises complicated privacy questions. Cameras that are always within reach, AI systems that learn from real-world footage and human reviewers who help train those systems create a chain of data that many users rarely think about. As smart wearables become more common, transparency about how that data is used will matter more than ever.


So here is the bigger question. Would you feel comfortable wearing AI glasses if someone halfway around the world might review the footage your device captures? Let us know by writing to us at Cyberguy.com






Listen to this: Mabe Fratti’s experimental cello pop


The opening notes of “Kravitz,” which kicks off Mabe Fratti’s 2024 record Sentir Que No Sabes, are lodged in my brain permanently. It’s not a showy album, by any means. But there’s something about the buzzing of her cello, plucked as you might an upright bass: the way the notes ring out before coming to an abrupt stop, fuzz still hanging in the air, set against a simple kick and snare sitting firmly in the pocket. There’s something industrial about the way it all comes together, like a jazzy “Closer.”

Then come Fratti’s paranoid lyrics in Spanish about ears in the ceiling and someone listening through the walls, and the slightly atonal horn blasts. In the back half, the arrangement blooms with big piano chords, and the drums pick up steam. It’s the perfect opening to a record that sees Fratti taking her experimental impulses and working them into something that more closely resembles pop music, straying further from her avant-garde roots.

Fratti was born in Guatemala but operates out of Mexico. She’s told Pitchfork that, as a child, her parents mostly played Christian and classical music around the house. But as a teen, she discovered LimeWire and the works of experimental composers like György Ligeti. This more expansive, internet-fed musical diet is on display in tracks like “Pantalla Azul.” It flits about, toying with various styles from goth rock to new age, but always coming back to the strength of Fratti’s melodic instincts. Meanwhile, “Oídos” leans fully into chamber pop, with echoed cello stabs, plaintive trumpet and what sounds like an autoharp.

Even when the arrangements are stripped down, Sentir Que No Sabes sounds lush and enveloping. It would feel equally at home in a coffee shop or on an arena stage. The production from I. La Católica (Héctor Tosta) is the glue holding together Fratti’s frantic stylistic shifts and jagged cello manipulations. It would be easy for the delicate horns, atonal pizzicato strings, and icy digital synths to sound like several different albums stitched together haphazardly. Instead, the undercurrent of unease and lightly crushed drums form a thread tying all the disparate pieces together.

That’s not to say there aren’t moments of full-on experimental freakouts. Fratti indulges her more abstract musical inclinations on interludes like “Elástica” I and II, but the brilliance of Sentir Que No Sabes is in how it repackages her experimental instincts into something more approachable and downright catchy at times.


A comparison often thrown around when discussing Fratti’s music is Arthur Russell, and it makes sense. Russell was also an avant-garde cellist with surprising pop instincts. But he rarely married those two sides of his music as directly as Fratti does. For the most part, he had pop songs, and he had experimental compositions. Over her last few albums, both as a solo artist and as one half of the duo Titanic, Mabe Fratti has sought to break down those walls.

