
Why the Microsoft 365 Copilot bug matters for data security



You trust your email security settings for a reason. So when an AI assistant quietly reads and summarizes messages marked confidential, that trust takes a hit.

Microsoft says a bug in Microsoft 365 Copilot allowed its AI chat feature to process sensitive emails beginning in late January.

The issue bypassed the Data Loss Prevention (DLP) policies that organizations rely on to protect private information. Put simply, emails that were supposed to stay locked down were being summarized anyway.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.

Microsoft 365 Copilot’s work chat interface sits at the center of the issue after a bug allowed it to summarize confidential emails. (Microsoft)

Microsoft 365 Copilot bug summarized confidential emails

Microsoft says a coding error impacted Microsoft 365 Copilot Chat, specifically the “work tab” feature. The AI assistant helps business users summarize content, draft responses and analyze information across Word, Excel, PowerPoint, Outlook and OneNote.

Beginning Jan. 21, an internal bug labeled CW1226324 caused Copilot to read and summarize emails stored in Sent Items and Drafts folders.

The real concern runs deeper. Several of those messages carried confidentiality or sensitivity labels.


Companies apply those labels along with DLP policies to block automated systems from accessing restricted content. Despite those safeguards, Copilot still generated summaries. 

We reached out to Microsoft, and a spokesperson provided CyberGuy with the following statement:

“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.” 

Why the Microsoft 365 Copilot bug matters for data security

AI tools feel helpful. They save time and reduce busy work. But they also rely on deep access to your data. When safeguards fail, even temporarily, sensitive content can move in ways you did not expect.



For businesses, that could mean:

Legal discussions summarized outside intended controls

Financial projections processed despite restrictions

HR communications exposed to automated analysis

Even if no data leaves the organization, the bypass itself raises concerns about how AI integrates with enterprise security systems.


Business users rely on Copilot to streamline work, but a recent bug raised concerns about how it handles sensitive email content. (Microsoft)

How Microsoft is fixing the Microsoft 365 Copilot bug

Microsoft says it began rolling out a fix in early February. The company continues to monitor deployment and is contacting some affected users to verify the fix works.

However, Microsoft has not provided a final timeline for full remediation. It has also not disclosed how many organizations were affected.

The issue is tagged as an advisory, which usually signals limited scope or impact. Still, many security professionals will want deeper clarity before feeling comfortable.

What this Microsoft 365 Copilot issue reveals about AI security

This incident highlights something many companies are wrestling with right now. AI assistants sit inside productivity platforms. They need access to email, documents and collaboration tools to work well.



At the same time, those platforms contain your most sensitive information. When AI features expand quickly, security policies must evolve just as fast. Otherwise, even a small code mistake can create unexpected exposure.

The Copilot chat feature was designed to boost productivity, yet a code error let it process emails labeled confidential. (Microsoft)

Ways to stay safe after the Microsoft 365 Copilot bug

If your organization uses Microsoft 365 Copilot, here are practical steps to reduce risk:

1) Review Copilot access settings

Work with your IT team to confirm which folders and data sources Copilot can access.


2) Revalidate DLP policies

Test sensitivity labels and DLP rules to confirm they block AI processing as intended.

3) Monitor advisory updates

Stay current on Microsoft service alerts and verify that the fix is fully deployed in your tenant.

4) Limit AI scope during investigations

If you have concerns, consider temporarily restricting Copilot features until verification is complete.

5) Train employees on AI boundaries

Remind staff that AI assistants can process draft and sent messages. Encourage careful handling of sensitive content.

6) Audit Copilot activity logs

Review audit logs to see whether Copilot accessed or summarized labeled emails. This helps determine actual exposure rather than assumed risk.


7) Review sensitivity label configuration

Confirm that confidential labels are configured to block AI processing where required. Misconfigured labels can create gaps even after a bug is fixed.

8) Reassess retention and draft policies

Because the issue involved Sent Items and Drafts, evaluate whether sensitive drafts should be stored long-term or deleted after sending.

9) Limit Copilot to specific user groups

Instead of enabling Copilot organization-wide, consider a phased deployment to departments with lower sensitivity exposure.

10) Conduct a post-incident security review

Use this moment to reassess how AI tools integrate with compliance controls. Treat it as a learning opportunity rather than a one-time glitch.
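For step 6, many admin consoles let you export audit logs to CSV, and a quick local filter can then surface Copilot-related entries for review. Here is a minimal sketch assuming a hypothetical export layout with a RecordType column; the column names and sample values are illustrative, not Microsoft's actual schema:

```python
import csv
import io

def copilot_records(csv_text):
    """Return rows from an audit-log CSV whose RecordType mentions Copilot.
    Assumes the export has a 'RecordType' column (layout is hypothetical)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if "Copilot" in row.get("RecordType", "")]

# Hypothetical export snippet
sample = """RecordType,UserId,Operation
CopilotInteraction,alice@example.com,CopilotInteraction
ExchangeItem,bob@example.com,MailItemsAccessed
"""

for row in copilot_records(sample):
    print(row["UserId"], row["Operation"])
```

A filter like this only narrows the haystack; your IT team would still need to check whether the flagged interactions touched labeled content.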

Pro Tip: This Copilot bug centers on enterprise controls. Even so, AI tools operate on your devices and accounts, so keeping software up to date and using strong antivirus software adds an important layer of defense. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com


Considering a more private email provider

Enterprise AI bugs raise a bigger question: how much access should email platforms have to your data in the first place? If you want an added layer of privacy beyond mainstream providers, privacy-focused email services are worth exploring.

Some offer end-to-end encryption, support for PGP encryption and a strict no-ads business model that avoids scanning messages for marketing purposes.


Many also allow you to create disposable email aliases, which can reduce spam and limit exposure if one address is compromised.

While no provider is immune to software bugs, choosing an email service built around privacy rather than data monetization can limit how much of your information is accessible to automated systems in the first place.


For individuals, journalists and small businesses especially, that added control can make a meaningful difference.

For recommendations on private and secure email providers that offer alias addresses, visit Cyberguy.com

Kurt’s key takeaways

AI assistants are becoming part of daily work life. They promise speed, efficiency and smarter workflows. But convenience should never outrun security.

This Copilot bug may have a limited impact. Still, it serves as a reminder that AI tools are only as strong as the guardrails behind them.

When those guardrails slip, even briefly, sensitive information can move in unexpected ways. As AI becomes more embedded in business software, trust will depend on transparency, fast fixes and clear communication.


Here is the real question: If your AI assistant can see everything you write, are you fully confident it respects every boundary you set? Let us know by writing to us at Cyberguy.com



Copyright 2026 CyberGuy.com.  All rights reserved.  


Amazon.com says things are fixed after some issues with logging in and checking out


If you were having issues shopping on Amazon or loading your playlists on Amazon Music on Thursday, you weren’t alone. For over three hours today, Downdetector showed a sizable spike in people reporting issues with checkout, search, and logging in. The problem seemed to be affecting both the site and the mobile apps. But an Amazon spokesperson tells The Verge that the issues are now fixed.

“We’re sorry that some customers may have temporarily experienced issues while shopping,” Amazon spokesperson Jennie Bryant says in a statement. “We have resolved the issue, which was related to a software code deployment, and website and app are now running smoothly.”

Several Verge staffers experienced issues themselves while the problems were ongoing. Clicking through to many products produced a “sorry, something went wrong” error, and even pages that did load were not showing pricing. Users reported being repeatedly logged out of their accounts when trying to check out or load their cart. Even the parts of Amazon.com that were working seemed to be loading slowly.

The company has been dealing with AWS outages in Bahrain and the United Arab Emirates due to drone strikes by the Iranian military, but there has not been any word of more widespread outages in the US or elsewhere.

Update March 5th: Added comment from Amazon saying that things are fixed.


$163K in fake medical bill charges; AI uncovers it for you


Last summer, a man’s brother-in-law suffered a fatal heart attack. The hospital bill for four hours of emergency care: $195,628.

The man’s sister-in-law was ready to pay it. He asked her to wait. He requested an itemized bill with CPT codes, the universal billing codes hospitals use, and fed the whole thing into Claude, an AI chatbot.

Within minutes, Claude found duplicate charges, services billed as “inpatient” even though the patient was never admitted, supply costs inflated by 500% to 2,300% above Medicare rates and charges for procedures that never happened. He cross-checked with ChatGPT. Both AIs agreed. He wrote a six-page letter citing every violation by name.

The hospital dropped the bill to $33,000. An 83% reduction. Zero medical training. A $20 app.


A man cross-checked a hospital bill with AI and got it reduced by some 83%. (Neil Godwin/Getty Images)

Your bill is probably wrong, too

That story sounds extreme. It’s not.

The Medical Billing Advocates of America estimates 3 out of 4 medical bills contain errors. The average hospital bill over $10,000 has roughly $1,300 in mistakes. And less than 1% of denied insurance claims are ever appealed. Hospitals and insurers are banking on the fact that you won’t check.

AI flips that equation. You don’t need to understand CPT codes or have a medical billing degree. You just need to paste.

You can use AI platforms, like ChatGPT, to spot errors or suspicious charges on medical bills. (Jaap Arriens/NurPhoto via Getty Images)


The 5-minute audit

Step 1: Call your provider and request an itemized bill with CPT codes. Not the summary. The full line-by-line breakdown. You’re legally entitled to this.

Step 2: Open ChatGPT, Claude, Grok or Gemini (free versions work) and paste this:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

Step 3: Paste your bill. The AI will translate every line and tell you what looks wrong.



If the AI finds errors, call the billing department and ask for a supervisor. (iStock)

Step 4: If the AI finds errors (it probably will), call the billing department and ask for a supervisor. Reference the specific codes. Hospitals resolve disputes all the time when patients show up prepared.
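The duplicate-charge check the AI performs in Step 2 can also be approximated locally with a few lines of scripting, if you prefer to double-check the chatbot's work. A minimal sketch; the CPT codes, descriptions, and amounts below are made up for illustration:

```python
from collections import Counter

def flag_duplicates(line_items):
    """Flag billing line items that appear more than once on a bill.
    line_items: list of (cpt_code, description, amount) tuples."""
    counts = Counter((code, amount) for code, _, amount in line_items)
    return [(code, amount, n) for (code, amount), n in counts.items() if n > 1]

# Hypothetical itemized bill (codes and amounts are illustrative only)
bill = [
    ("93010", "ECG interpretation", 250.00),
    ("93010", "ECG interpretation", 250.00),  # duplicate line item
    ("99285", "ER visit, high severity", 3400.00),
]

print(flag_duplicates(bill))  # [('93010', 250.0, 2)]
```

A script like this only catches exact repeats; the harder checks, such as inflated supply costs or improper inpatient coding, are where pasting the full bill into an AI assistant earns its keep.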

Pro tip: Counterforce Health (counterforcehealth.org) is a free AI tool built specifically for insurance denial appeals. Worth bookmarking.

It’s time to give your medical bills a thorough examination. The AI will see you now.

Real talk. Everybody’s talking about AI. Nobody’s showing you what to actually DO with it. My new free newsletter, Splash of AI (SplashofAI.com), gives you one trick, one tool and one “wait, I can do THAT?” moment every single week. Five minutes. Plain English. The kind of stuff that saves you time, money or both. You’ll wonder how you got by without it.


Send this to someone who is staring at a medical bill they can’t make sense of. Forward this right now. Seriously. This could save them hundreds or even thousands of dollars, and it takes less time than making coffee.


Get tech-smarter. Starting today.

Kim Komando cuts through the tech noise so you don’t have to. Real advice. Zero jargon. Every single day.

Catch the national radio show on 500-plus stations, get the free daily newsletter, watch on YouTube or listen to the podcast wherever you get your shows. It’s all waiting at Komando.com.

Copyright 2026, WestStar Multimedia Entertainment. All rights reserved.


Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya


Meta’s AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show “bathroom visits, sex and other intimate moments.”

So far, at least one proposed class action lawsuit accusing Meta of violating false advertising and privacy laws has emerged in response to Svenska Dagbladet’s reporting, citing the company’s claim that its smart glasses are designed for privacy:

By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer’s decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person’s life.

The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text, or audio, with the goal of helping AI systems make sense of the data they’re training on. “We see everything — from living rooms to naked bodies,” one worker says, according to Svenska Dagbladet. “Meta has that type of content in its databases.”

A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this “does not always work as intended,” and some faces are still visible. Another person reportedly tells the outlet that a wearer’s bank cards are sometimes seen in the footage they review as well.

Meta’s Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance.


EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025 — more than tripling its sales in 2023 and 2024 combined. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses “unless you turn off ‘Hey Meta.’” It also stopped allowing wearers to opt out of storing their voice recordings in the cloud.

As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses “stays on the user’s device” unless they choose to share it with other people or Meta.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Clayton says. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
