Technology

UnitedHealth cyberattack exposes 190 million in largest US healthcare data breach

UnitedHealth’s Change Healthcare unit suffered a data breach in February 2024; news of the attack first surfaced on Feb. 21 of that year.

The breach was initially reported to have affected around 100 million individuals, but the U.S. health insurance giant has now revealed that the actual number is significantly higher: 190 million. That makes it the largest breach of medical data in U.S. history, affecting nearly half the country’s population.

A breach of this magnitude can have devastating consequences for the American people: if the data finds its way to the dark web, malicious actors could exploit it for a range of attacks.

A doctor looking at a patient’s private information (Kurt “CyberGuy” Knutsson)

The updated impact assessment

UnitedHealth confirmed on Friday, Jan. 24, 2025, that the ransomware attack on its Change Healthcare unit affected approximately 190 million people in the United States. The company had previously estimated the number of affected individuals to be around 100 million in its preliminary analysis filed with the Office for Civil Rights, a division of the U.S. Department of Health and Human Services that investigates data breaches.

UnitedHealth stated that the majority of those impacted have already been notified, either directly or through substitute notice. The final tally of affected individuals will be confirmed and submitted to the Office for Civil Rights at a later date.

The company tells CyberGuy it is “not aware of any misuse of individuals’ information as a result of this incident and has not seen electronic medical record databases appear in the data during the analysis.” However, UnitedHealth did not disclose when it became aware of the additional 90 million victims, how the revised figure was determined or what changes led to the updated number.

Illustration of a hacker at work (Kurt “CyberGuy” Knutsson)

What you need to know about the data breach

The cyberattack on Change Healthcare in February caused widespread disruptions across the U.S. healthcare sector, as the company took its systems offline to contain the breach. This shutdown impacted critical services such as claims processing, payments and data sharing, which many healthcare providers rely on.

The stolen data varied by individual but included a broad range of personal and sensitive information, such as names, addresses, dates of birth, phone numbers, email addresses and government ID numbers, including Social Security, driver’s license and passport details.

Plus, hackers may have accessed health-related information, including diagnoses, medications, test results, imaging records, care and treatment plans, and health insurance details. Financial and banking information tied to claims and payment data was also reportedly compromised.

The breach was the result of a ransomware attack by ALPHV/BlackCat, a Russian-speaking ransomware and extortion group that later took credit for the intrusion. Ransomware is a form of malware that locks victims out of their data unless a ransom is paid.

During a House hearing in April, Change Healthcare admitted that the breach was made possible due to inadequate security measures, specifically the absence of two-factor authentication to protect its systems.

6 ways to protect yourself from Change Healthcare data breach

1. Remove your personal information from the internet: The breach has exposed sensitive personal data, making it essential to reduce your online footprint. While no service can guarantee complete data removal, a reputable data removal service can significantly limit your exposure. These services systematically monitor and erase your personal information from numerous websites and data brokers. Check out my top picks for data removal services here.

2. Be wary of mailbox communications: With addresses among the compromised data, scammers may exploit this breach to send fraudulent letters. Watch for mail claiming missed deliveries, account suspensions or security alerts, and always verify the authenticity of such communications before responding or taking action.

3. Be cautious of phishing attempts and use strong antivirus software: Scammers may use your compromised email or phone number to target you with phishing attacks. Be wary of messages asking for personal information or containing suspicious links. To protect yourself, ensure strong antivirus software is installed on all your devices. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

4. Monitor your accounts: Given the scope of this breach, regular monitoring of your bank accounts, credit card statements and other financial accounts is critical. Look for unauthorized transactions or suspicious activity and immediately report any issues to your bank or credit card provider.

5. Recognize and report a Social Security scam: If your Social Security number is exposed, you could become a target for related scams. Official communication regarding Social Security issues usually comes via mail, not phone calls or emails. Learn more about spotting and reporting scams by visiting the Social Security Administration’s scam information page.

6. Invest in identity theft protection: Data breaches happen every day, and most never make the headlines, but with an identity theft protection service, you’ll be notified if and when you are affected. Identity theft companies can monitor personal information like your Social Security number, phone number and email address and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

Kurt’s key takeaway

It’s surprising that a company of UnitedHealth’s scale failed to implement even basic cybersecurity measures when handling customer data. A breach affecting 190 million people – nearly half of the U.S. population – is staggering, leaving almost anyone at risk of becoming a target for hackers. While the company is still assessing the full extent of the breach, you can take precautions now by being cautious with any unknown links or unsolicited calls. Bad actors may use a variety of tactics to cause harm.

Do you think these companies are doing enough to protect your data, and is the government doing enough to catch those behind cyberattacks? Let us know by writing us at Cyberguy.com/Contact.

For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Copyright 2025 CyberGuy.com. All rights reserved.

Technology

Anthropic upgrades Claude’s memory to attract AI switchers

Anthropic is making it easier to switch to its Claude AI from other chatbots with an update that brings Claude’s memory feature to users on the free plan, along with a new prompt and dedicated tool for importing data from other chatbots. These upgrades could allow users who have been using rivals like OpenAI’s ChatGPT or Google’s Gemini to quickly copy the data their preferred AI has collected on them and bring it over to Anthropic’s chatbot. That way, they don’t have to “start over” teaching Claude the context and history their previous chatbot already knows.

The option to import and export memories from Claude has been available since October, when Anthropic also rolled out the option for users to turn on Claude’s memory. Up until now, the memory feature was only available to users on paid Claude subscriptions, but now all Claude users can turn it on by going into “settings” then “capabilities.” This menu is also where users can find the new memory importing tool, which has users copy a pre-written prompt into their previous AI then copy the output from that prompt back into Claude’s importing tool.

Anthropic is introducing the upgraded memory importing tool as Claude is seeing a rise in popularity, driven by tools like Claude Code and Claude Cowork. Last month, Anthropic launched its new Opus 4.6 and Sonnet 4.6 models, which the company says are better at coding and completing complex tasks like working through a spreadsheet or filling out forms.

Anthropic has also been experiencing a spike in attention recently after pushing back against demands from the Pentagon to loosen the guardrails on its AI models, with the company stating publicly that it drew “red lines” around mass surveillance and fully autonomous lethal weapons.

Technology

Why the Microsoft 365 Copilot bug matters for data security

You trust your email security settings for a reason. So when an AI assistant quietly reads and summarizes messages marked confidential, that trust takes a hit.

Microsoft says a bug in Microsoft 365 Copilot allowed its AI chat feature to process sensitive emails since late January.

The issue bypassed Data Loss Prevention policies that organizations rely on to protect private information. Put simply, emails that were supposed to stay locked down were being summarized anyway.

Microsoft 365 Copilot’s work chat interface sits at the center of the issue after a bug allowed it to summarize confidential emails. (Microsoft)

Microsoft 365 Copilot bug summarized confidential emails

Microsoft says a coding error impacted Microsoft 365 Copilot Chat, specifically the “work tab” feature. The AI assistant helps business users summarize content, draft responses and analyze information across Word, Excel, PowerPoint, Outlook and OneNote.

Beginning Jan. 21, an internal bug labeled CW1226324 caused Copilot to read and summarize emails stored in Sent Items and Drafts folders.

The real concern runs deeper. Several of those messages carried confidentiality or sensitivity labels.

Companies apply those labels along with DLP policies to block automated systems from accessing restricted content. Despite those safeguards, Copilot still generated summaries. 

We reached out to Microsoft, and a spokesperson provided CyberGuy with the following statement:

“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.” 

Why the Microsoft 365 Copilot bug matters for data security

AI tools feel helpful. They save time and reduce busy work. But they also rely on deep access to your data. When safeguards fail, even temporarily, sensitive content can move in ways you did not expect.

For businesses, that could mean:

Legal discussions summarized outside intended controls

Financial projections processed despite restrictions

HR communications exposed to automated analysis

Even if no data leaves the organization, the bypass itself raises concerns about how AI integrates with enterprise security systems.

Business users rely on Copilot to streamline work, but a recent bug raised concerns about how it handles sensitive email content. (Microsoft)

How Microsoft is fixing the Microsoft 365 Copilot bug

Microsoft says it began rolling out a fix in early February. The company continues to monitor deployment and is contacting some affected users to verify the fix works.

However, Microsoft has not provided a final timeline for full remediation. It has also not disclosed how many organizations were affected.

The issue is tagged as an advisory, which usually signals limited scope or impact. Still, many security professionals will want deeper clarity before feeling comfortable.

What this Microsoft 365 Copilot issue reveals about AI security

This incident highlights something many companies are wrestling with right now. AI assistants sit inside productivity platforms. They need access to email, documents and collaboration tools to work well.

At the same time, those platforms contain your most sensitive information. When AI features expand quickly, security policies must evolve just as fast. Otherwise, even a small code mistake can create unexpected exposure.

The Copilot chat feature was designed to boost productivity, yet a code error let it process emails labeled confidential. (Microsoft)

Ways to stay safe after the Microsoft 365 Copilot bug

If your organization uses Microsoft 365 Copilot, here are practical steps to reduce risk:

1) Review Copilot access settings

Work with your IT team to confirm which folders and data sources Copilot can access.

2) Revalidate DLP policies

Test sensitivity labels and DLP rules to ensure they block AI processing as intended.

3) Monitor advisory updates

Stay current on Microsoft service alerts and verify that the fix is fully deployed in your tenant.

4) Limit AI scope during investigations

If you have concerns, consider temporarily restricting Copilot features until verification is complete.

5) Train employees on AI boundaries

Remind staff that AI assistants can process drafts and sent messages. Encourage careful handling of sensitive content.

6) Audit Copilot activity logs

Review audit logs to see whether Copilot accessed or summarized labeled emails. This helps determine actual exposure rather than assumed risk.
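
If your tenant can export audit events as JSON lines, a small script can surface the events worth reviewing. The field names used here (`AppName`, `SensitivityLabel`) are assumptions about an export format, not a documented Microsoft schema; adjust them to match what your actual export contains:

```python
# Sketch: scan an exported audit log (one JSON object per line) for AI-assistant
# events that touched labeled mail. Field names are illustrative assumptions.
import json

def copilot_label_hits(path: str) -> list[dict]:
    """Return audit events where the assistant handled a sensitivity-labeled item."""
    hits = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("AppName") == "Copilot" and event.get("SensitivityLabel"):
                hits.append(event)
    return hits
```

Even a rough filter like this turns "assumed risk" into a concrete list of items to investigate.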

7) Review sensitivity label configuration

Confirm that confidential labels are configured to block AI processing where required. Misconfigured labels can create gaps even after a bug is fixed.

8) Reassess retention and draft policies

Because the issue involved Sent Items and Drafts, evaluate whether sensitive drafts should be stored long-term or deleted after sending.

9) Limit Copilot to specific user groups

Instead of enabling Copilot organization-wide, consider a phased deployment to departments with lower sensitivity exposure.

10) Conduct a post-incident security review

Use this moment to reassess how AI tools integrate with compliance controls. Treat it as a learning opportunity rather than a one-time glitch.

Pro Tip: This Copilot bug centers on enterprise controls. Even so, AI tools operate on your devices and accounts, so keeping software up to date and using strong antivirus software adds an important layer of defense. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

Consider a more private email provider

Enterprise AI bugs raise a bigger question: how much access should email platforms have to your data in the first place? If you want an added layer of privacy beyond mainstream providers, privacy-focused email services are worth exploring.

Some offer end-to-end encryption, support for PGP encryption and a strict no-ads business model that avoids scanning messages for marketing purposes.

Many also allow you to create disposable email aliases, which can reduce spam and limit exposure if one address is compromised.

While no provider is immune to software bugs, choosing an email service built around privacy rather than data monetization can limit how much of your information is accessible to automated systems in the first place.

For individuals, journalists and small businesses especially, that added control can make a meaningful difference.

For recommendations on private and secure email providers that offer alias addresses, visit Cyberguy.com

Kurt’s key takeaways

AI assistants are becoming part of daily work life. They promise speed, efficiency and smarter workflows. But convenience should never outrun security.

This Copilot bug may have a limited impact. Still, it serves as a reminder that AI tools are only as strong as the guardrails behind them.

When those guardrails slip, even briefly, sensitive information can move in unexpected ways. As AI becomes more embedded in business software, trust will depend on transparency, fast fixes and clear communication.

Here is the real question: If your AI assistant can see everything you write, are you fully confident it respects every boundary you set? Let us know by writing to us at Cyberguy.com

Copyright 2026 CyberGuy.com.  All rights reserved.  


Technology

Samsung’s Digital Home Key lets you use your phone as your key

Just days after showing off the Galaxy S26, Samsung is finally rolling out the ability for users to unlock their home with a tap of their phone or by simply approaching their door. The new feature, called Digital Home Key, will live inside Samsung Wallet and is powered by the Aliro smart home standard.

Samsung first teased its Digital Home Key feature in 2024 and said it would be available in 2025. That didn’t pan out, as the CSA’s Aliro standard — which will let users unlock smart locks with any phone — only arrived in February of this year. The new standard uses near-field communication (NFC) for its tap-to-unlock technology. It also supports ultra-wideband (UWB), letting users unlock their door as they approach without pulling out their phone.

To add a Digital Home Key to your wallet, you’ll need to set up a compatible smart lock through SmartThings using Matter. Only some Galaxy smartphones support both NFC and UWB, including the Galaxy Z Fold 4 and up, as well as the Galaxy S22 Ultra and up. You can view the full list of compatible devices on Samsung’s website.
