Technology
Can AI help someone stage a fake kidnapping scam against you or your family?
You may feel confident in your ability to avoid becoming a victim of cyber scams. You know what to look for, and you won’t let someone fool you.
Then you receive a phone call from your son, which is unusual because he rarely calls. You hear a shout and sounds resembling a scuffle, making you take immediate notice. Suddenly, you hear a voice that you are absolutely certain is your son, screaming for help. When the alleged kidnappers come on the line and demand money to keep your son safe, you are sure that everything is real because you heard his voice.
Unfortunately, scammers are using artificial intelligence (AI) to mimic people’s voices, and they can deploy these cloned voices in schemes such as fake kidnapping scams. This particular scam seems to be rare, but it’s happening.
CLICK TO GET KURT’S FREE CYBERGUY NEWSLETTER WITH SECURITY ALERTS, QUICK VIDEO TIPS, TECH REVIEWS AND EASY HOW-TO’S TO MAKE YOU SMARTER
An illustration of a scammer. (Kurt “CyberGuy” Knutsson)
How frequent are fake kidnapping calls enhanced with AI?
Such fake emergency scams occur frequently enough that the Federal Trade Commission (FTC) provided warnings and examples for consumers. Hard numbers that indicate the frequency of these calls aren’t readily available, though, especially for calls known to make use of AI.
Such scams are certainly possible with current AI technology. Fake video and audio of politicians and other famous people are appearing with regularity. Aided by AI, these clips are frighteningly believable.
You may recall the incident in late 2023 involving a fake dental plan advertisement that featured Tom Hanks. AI technology created the video. Hanks had to make a social media post calling out the fake advertisement.
Empty warehouse with a chair. (Kurt “CyberGuy” Knutsson)
MORE: THE ‘UNSUBSCRIBE’ EMAIL SCAM IS TARGETING AMERICANS
How does an AI fake call work?
The AI technology creates a fake by analyzing audio samples of the person it wants to mimic. By processing enormous amounts of data, it captures the distinctive characteristics of the person’s voice, allowing it to produce a highly realistic imitation.
Once the AI is able to create the fake audio, programmers then tell it what to say, creating a personalized message designed to sell dental plans or to convince you that your loved one is in trouble with kidnappers.
Some AI developers who use voice cloning for helpful purposes — such as allowing people with medical conditions like ALS to regain their “speech” — claim they can mimic a voice with as little as a few minutes of audio. However, the more audio that’s available, the more realistic the cloned voice should sound. Twenty minutes of audio is far better than three, for example.
As AI’s capabilities continue to expand at breakneck speed, you can expect the time requirements to shrink in future years.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
An illustration of artificial intelligence. (Kurt “CyberGuy” Knutsson)
MORE: HOW TO GUARD AGAINST BRUSHING SCAMS
Do I have to worry about falling for a fake AI audio kidnapping scheme?
Realistically, the vast majority of people don’t have to worry about a fake kidnapping scheme that originates from AI-generated audio. If your loved one has a lot of video and audio on social media, though, the scammers may be able to find enough source audio to create a realistic fake.
Even though AI makes this type of scam easier to perform, the setup process still remains too time-consuming for most scammers. After all, scammers running this type of scheme are counting on the fear that surges when you receive such a call to make you miss obvious clues that it’s a fake.
The scammers may simply have a random child scream and sob uncontrollably, counting on you to jump to the conclusion that it’s your child. This is far easier than using AI to source and generate audio … at least for now.
A woman surrounded by data. (Kurt “CyberGuy” Knutsson)
MORE: HOW SCAMMERS USE AI TOOLS TO FILE PERFECT-LOOKING TAX RETURNS IN YOUR NAME
Steps you can take to protect yourself from a fake kidnapping scam
Even though the scammers try to gain the upper hand with the suddenness of the fake kidnapping call and by catching you off guard, you have some steps you can take before and after you receive this type of call to prepare and protect yourself.
1. Ask your loved ones to keep you informed about trips: Fake kidnappers may try to convince you that the abduction is taking place outside your city. However, if you know that your loved one did not leave town, you have strong reason to believe the call is fake.
2. Set up a safe word or phrase: Set up a safe word that your loved ones should use if they ever call you from a dangerous situation or while under duress. A scammer is not going to know this safe word. If you don’t hear the safe word, you know it’s probably a fake call.
3. Use privacy settings on social media: Ask your family members to limit who can see their social media posts. This would make it harder for a scammer to obtain source audio that’s usable in a fake kidnapping audio call. For more information on maintaining and protecting your online privacy, click here.
4. Try to text your loved one: Either during or immediately after the call, send a text message to your loved one without telling the caller. Ask your loved one to text you back immediately, so you can converse without tipping off the scammers. If you receive a text back, you can be confident the call is a fake. Consider creating a code word that you can use with the entire family. When you send this code word in a text, everyone knows it’s a serious situation that requires an immediate response.
5. Stay calm and think things through: Finally, although it is incredibly difficult to stay calm when you receive this kind of call, it’s important to keep thinking clearly. Do not panic. Regardless of whether it’s a real call or a scam call, panicking is never going to help. Listen for clues that make it obvious the call is a scam. Try to gather some information that can help you make a clear-headed judgment about the legitimacy of the call.
Kurt’s key takeaways
As AI continues to become more readily available and gains sophistication, scammers will be ready to take advantage of it. Perhaps by then, AI will level the playing field by coming up with ways to help us protect ourselves. Until then, taking steps to protect your family, such as setting up a safe word, can give you some peace of mind.
Are you concerned about how scammers may take advantage of AI to create new scams? Let us know by writing us at Cyberguy.com/Contact
For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Ask Kurt a question or let us know what stories you’d like us to cover.
Answers to the most-asked CyberGuy questions:
Copyright 2024 CyberGuy.com. All rights reserved.
Technology
Anthropic upgrades Claude’s memory to attract AI switchers
Anthropic is making it easier to switch to its Claude AI from other chatbots with an update that brings Claude’s memory feature to users on the free plan, along with a new prompt and dedicated tool for importing data from other chatbots. These upgrades could allow users who have been using rivals like OpenAI’s ChatGPT or Google’s Gemini to quickly copy the data their preferred AI has collected on them and bring it over to Anthropic’s chatbot. That way, they don’t have to “start over” teaching Claude the context and history their previous chatbot already knows.
The option to import and export memories from Claude has been available since October, when Anthropic also rolled out the option for users to turn on Claude’s memory. Until now, the memory feature was only available to users on paid Claude subscriptions, but now all Claude users can turn it on by going into “settings,” then “capabilities.” This menu is also where users can find the new memory importing tool, which has users copy a pre-written prompt into their previous AI and then copy the output from that prompt back into Claude’s importing tool.
Anthropic is introducing the upgraded memory importing tool as Claude is seeing a rise in popularity, driven by tools like Claude Code and Claude Cowork. Last month, Anthropic launched its new Opus 4.6 and Sonnet 4.6 models, which the company says are better at coding and completing complex tasks like working through a spreadsheet or filling out forms.
Anthropic has also been experiencing a spike in attention recently after pushing back against demands from the Pentagon to loosen the guardrails on its AI models, with the company stating publicly that they drew “red lines” around mass surveillance and fully autonomous lethal weapons.
Technology
Why the Microsoft 365 Copilot bug matters for data security
You trust your email security settings for a reason. So when an AI assistant quietly reads and summarizes messages marked confidential, that trust takes a hit.
Microsoft says a bug in Microsoft 365 Copilot allowed its AI chat feature to process sensitive emails since late January.
The issue bypassed Data Loss Prevention policies that organizations rely on to protect private information. Put simply, emails that were supposed to stay locked down were being summarized anyway.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter
Microsoft 365 Copilot’s work chat interface sits at the center of the issue after a bug allowed it to summarize confidential emails. (Microsoft)
Microsoft 365 Copilot bug summarized confidential emails
Microsoft says a coding error impacted Microsoft 365 Copilot Chat, specifically the “work tab” feature. The AI assistant helps business users summarize content, draft responses and analyze information across Word, Excel, PowerPoint, Outlook and OneNote.
Beginning Jan. 21, an internal bug labeled CW1226324 caused Copilot to read and summarize emails stored in Sent Items and Drafts folders.
The real concern runs deeper. Several of those messages carried confidentiality or sensitivity labels.
Companies apply those labels along with DLP policies to block automated systems from accessing restricted content. Despite those safeguards, Copilot still generated summaries.
We reached out to Microsoft, and a spokesperson provided CyberGuy with the following statement:
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.”
Why the Microsoft 365 Copilot bug matters for data security
AI tools feel helpful. They save time and reduce busy work. But they also rely on deep access to your data. When safeguards fail, even temporarily, sensitive content can move in ways you did not expect.
YOUR PHONE SHARES DATA AT NIGHT: HERE’S HOW TO STOP IT
For businesses, that could mean:
Legal discussions summarized outside intended controls
Financial projections processed despite restrictions
HR communications exposed to automated analysis
Even if no data leaves the organization, the bypass itself raises concerns about how AI integrates with enterprise security systems.
Business users rely on Copilot to streamline work, but a recent bug raised concerns about how it handles sensitive email content. (Microsoft)
How Microsoft is fixing the Microsoft 365 Copilot bug
Microsoft says it began rolling out a fix in early February. The company continues to monitor deployment and is contacting some affected users to verify the fix works.
However, Microsoft has not provided a final timeline for full remediation. It has also not disclosed how many organizations were affected.
The issue is tagged as an advisory, which usually signals limited scope or impact. Still, many security professionals will want deeper clarity before feeling comfortable.
What this Microsoft 365 Copilot issue reveals about AI security
This incident highlights something many companies are wrestling with right now. AI assistants sit inside productivity platforms. They need access to email, documents and collaboration tools to work well.
TIKTOK AFTER THE US SALE: WHAT CHANGED AND HOW TO USE IT SAFELY
At the same time, those platforms contain your most sensitive information. When AI features expand quickly, security policies must evolve just as fast. Otherwise, even a small code mistake can create unexpected exposure.
The Copilot chat feature was designed to boost productivity, yet a code error let it process emails labeled confidential. (Microsoft)
Ways to stay safe after the Microsoft 365 Copilot bug
If your organization uses Microsoft 365 Copilot, here are practical steps to reduce risk:
1) Review Copilot access settings
Work with your IT team to confirm which folders and data sources Copilot can access.
2) Revalidate DLP policies
Test sensitivity labels and DLP (Data Loss Prevention) rules to ensure they block AI processing as intended.
3) Monitor advisory updates
Stay current on Microsoft service alerts and verify that the fix is fully deployed in your tenant.
4) Limit AI scope during investigations
If you have concerns, consider temporarily restricting Copilot features until verification is complete.
5) Train employees on AI boundaries
Remind staff that AI assistants can process drafts and sent messages. Encourage careful handling of sensitive content.
6) Audit Copilot activity logs
Review audit logs to see whether Copilot accessed or summarized labeled emails. This helps determine actual exposure rather than assumed risk.
7) Review sensitivity label configuration
Confirm that confidential labels are configured to block AI processing where required. Misconfigured labels can create gaps even after a bug is fixed.
8) Reassess retention and draft policies
Because the issue involved Sent Items and Drafts, evaluate whether sensitive drafts should be stored long-term or deleted after sending.
9) Limit Copilot to specific user groups
Instead of enabling Copilot organization-wide, consider a phased deployment to departments with lower sensitivity exposure.
10) Conduct a post-incident security review
Use this moment to reassess how AI tools integrate with compliance controls. Treat it as a learning opportunity rather than a one-time glitch.
Pro Tip: This Copilot bug centers on enterprise controls. Even so, AI tools operate on your devices and accounts, so keeping software up to date and using strong antivirus software adds an important layer of defense. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
Considering a more private email provider
Enterprise AI bugs raise a bigger question: how much access should email platforms have to your data in the first place? If you want an added layer of privacy beyond mainstream providers, privacy-focused email services are worth exploring.
Some offer end-to-end encryption, support for PGP encryption and a strict no-ads business model that avoids scanning messages for marketing purposes.
AI WEARABLE HELPS STROKE SURVIVORS SPEAK AGAIN
Many also allow you to create disposable email aliases, which can reduce spam and limit exposure if one address is compromised.
While no provider is immune to software bugs, choosing an email service built around privacy rather than data monetization can limit how much of your information is accessible to automated systems in the first place.
For individuals, journalists and small businesses especially, that added control can make a meaningful difference.
For recommendations on private and secure email providers that offer alias addresses, visit Cyberguy.com
Kurt’s key takeaways
AI assistants are becoming part of daily work life. They promise speed, efficiency and smarter workflows. But convenience should never outrun security.
This Copilot bug may have a limited impact. Still, it serves as a reminder that AI tools are only as strong as the guardrails behind them.
When those guardrails slip, even briefly, sensitive information can move in unexpected ways. As AI becomes more embedded in business software, trust will depend on transparency, fast fixes and clear communication.
Here is the real question: If your AI assistant can see everything you write, are you fully confident it respects every boundary you set? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Samsung’s Digital Home Key lets you use your phone as your key
Just days after showing off the Galaxy S26, Samsung is finally rolling out the ability for users to unlock their home with a tap of their phone or by simply approaching their door. The new feature, called Digital Home Key, will live inside Samsung Wallet and is powered by the Aliro smart home standard.
Samsung first teased its Digital Home Key feature in 2024 and said the feature would be available in 2025. That didn’t pan out, as the CSA’s Aliro standard — which will let users unlock smart locks with any phone — only arrived in February of this year. The new standard uses near-field communication (NFC) for its tap-to-unlock technology. It also supports ultra-wideband (UWB), giving users the ability to unlock their door as they approach and without pulling out their phone.
To add a Digital Home Key to your wallet, you’ll need to set up a compatible smart lock through SmartThings using Matter. Only some Galaxy smartphones support both NFC and UWB, including the Galaxy Z Fold 4 and up, as well as the Galaxy S22 Ultra and up. You can view the full list of compatible devices on Samsung’s website.