Technology
AI cybersecurity risks and deepfake scams on the rise
Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it’s not really them. It’s a deepfake, powered by AI, and you’re the target of a sophisticated scam. These kinds of attacks are happening right now, and they’re getting more convincing every day.
That’s the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world’s biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.
From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that’s touching more lives than ever before.
Join The FREE CyberGuy Report: Get my expert tech tips, critical security alerts, and exclusive deals – plus instant access to my free Ultimate Scam Survival Guide when you sign up!
Illustration of cybersecurity risks. (Kurt “CyberGuy” Knutsson)
AI tools are leaking sensitive data
One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.
This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.
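As a rough illustration of how such leaks can be caught before they happen, here is a minimal sketch of a pre-submission prompt scanner. The patterns and the `flag_sensitive` helper are hypothetical and far simpler than what commercial data-loss-prevention tools use, but they show the idea: check text for sensitive markers before it ever reaches an AI service.

```python
import re

# Hypothetical patterns; real DLP tools use far richer rule sets.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "api_key": r"\bsk-[A-Za-z0-9]{20,}\b",
    "password_hint": r"(?i)\bpassword\s*[:=]\s*\S+",
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in PATTERNS.items() if re.search(pat, prompt)]

print(flag_sensitive("Summarize this: password: hunter2, reply to bob@example.com"))
# → ['email', 'password_hint']
```

A real deployment would cover many more patterns (client names, internal project codes, proprietary code fragments) and would block or redact the prompt rather than just flag it.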
Deepfake scams are now real-time and multilingual
AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.
Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.
Illustration of a person video conferencing on their laptop. (Kurt “CyberGuy” Knutsson)
AI is running phishing and scam operations at scale
Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.
Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.
Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.
AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like “Time is running out” might be reworded as “The hourglass is nearly empty for you,” making the message feel more personal and urgent while also avoiding detection.
By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort.
Stolen AI accounts are sold on the dark web
With AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.
Illustration of a person signing into their laptop. (Kurt “CyberGuy” Knutsson)
Jailbreaking AI is now a common tactic
Criminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:
- Telling the AI to pretend it is a fictional character that has no rules or limitations
- Phrasing dangerous questions as academic or research-related scenarios
- Asking for technical instructions using less obvious wording so the request doesn’t get flagged
Some AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.
AI-generated malware is entering the mainstream
AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of the group’s attacks are powered by AI. FunkSec has also used AI to help launch denial-of-service attacks, which flood websites or services with fake traffic until they crash or go offline. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.
Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool, Rhadamanthys Stealer 0.7, claimed to use AI for “text recognition” to sound cutting-edge, but researchers later found it was running older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.
Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.
Poisoned AI models are spreading misinformation
Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:
- Training poisoning: Attackers sneak false or harmful data into the model during development
- Retrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answers
In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.
A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.
Illustration of a hacker at work (Kurt “CyberGuy” Knutsson)
How to protect yourself from AI-driven cyber threats
AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:
1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.
2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. Strong antivirus software installed on all your devices is the best way to safeguard yourself from malicious links that install malware and potentially access your private information. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.
3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.
4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.
5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.
While no service can guarantee the complete removal of your data from the internet, a reputable data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.
6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.
7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.
8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.
9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit.
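To see why the 2FA advice in tip 3 blunts stolen passwords, consider how a standard authenticator code is derived. The sketch below implements the time-based one-time password algorithm from RFC 6238 (the scheme most authenticator apps use) in plain Python; the secret shown is the RFC’s published test key, not a real credential. Because the code changes every 30 seconds and depends on a secret the attacker never sees, a leaked password alone is not enough to log in.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at: int | None = None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time code (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For example, `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59)` returns `"287082"`, matching the RFC 6238 test vector for that timestamp.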
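For the curious, the “strong, unique passwords” that tip 8 recommends are not magic; they just mean high entropy drawn from a cryptographically secure random source. A hypothetical generator using Python’s standard `secrets` module looks like this (password managers do essentially the same thing, plus encrypted storage):

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Build a password from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a fresh 20-character password on every run
```

At 20 characters over a 72-symbol alphabet, that is roughly 123 bits of entropy, far beyond what credential-stuffing tools can brute-force.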
Kurt’s key takeaways
Cybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.
Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.
For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Copyright 2025 CyberGuy.com. All rights reserved.
AI photo match reunites Texas woman with lost cat after 103 days
Holiday gatherings and year-end travel often lead to a spike in missing pets. Doors open more often, routines shift and animals can slip outside in a moment of confusion.
New Year’s Eve brings loud fireworks, and shelters report some of their busiest nights of the entire year. Amid all that, one Texas family just experienced a heartwarming reunion thanks to AI photo matching on Petco Love Lost.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
AI photo matching on Petco Love Lost helped reunite a Texas family with their missing cat after 103 days. (ULISES RUIZ/AFP via Getty Images)
How AI photo matching kept the search going
Pam’s 11-year-old indoor cat, Grayson, had never been outside alone. She believes he slipped out while she unloaded groceries at their home in Plano, Texas. The moment she realized he was gone, she acted fast.
She said, “We went up and down the streets day and night. We went online in the neighborhood and on Love Lost. We put up flyers all over the neighborhood. Friends and neighbors were looking for him. I went to the animal shelter, posted him there, and went every day for over a month, hoping to find him.”
Pam uploaded Grayson’s photo to Petco Love Lost right away. She checked her daily match alerts and hoped she would see his familiar face pop up. She told CyberGuy, “I received match alerts almost every day from Lost Love, but never saw Grayson. His profile had been on their site for over 90 days.”
The moment everything changed
Missy, a nearby resident, spotted a thin cat in an alley near her home. She brought him inside, took a picture of him and then turned to Love Lost to see if anyone had reported a missing cat like him.
Missy explained how simple the process felt. “I used Lost Love to reunite them,” she said. “I uploaded a photo of the cat that we found, and it was matched through AI with the photo that the owner uploaded.”
She soon received an AI match alert and learned that the cross street Grayson’s owner, Pam, had listed in her lost post was only a mile from her home. Missy contacted Pam right away.
That message changed everything. “I am sure that if we had not posted his picture and enabled the ability to match the images, we would never have known what happened to Grayson,” Pam said. “And we would not have connected with Missy.”
Grayson, an indoor cat from Plano, Texas, was finally found thanks to a neighbor who uploaded his photo to an AI search tool. (DANIEL PERRON/Hans Lucas/AFP via Getty Images)
A long road for an aging cat
Grayson is almost 12 and has never lived outdoors. That made this reunion feel even more emotional, Pam said.
“I am still amazed at Grayson’s journey,” she added. “I look at him and cannot believe he made it through those 103 days. He is almost 12 years old, so he is not a young kitty.”
Pam said she still thinks about what those months were like for him. “[I] guess I will always wonder where he was and how many stops he made before he reached Missy’s loving home,” she said. “He must have known she would take care of him. It takes a special person to take the time to reunite a beloved pet with their family. Missy and her family went above and beyond to reunite us with Grayson.”
Why pet tech matters during the holidays
This season brings joy but also risks for pets. Visitors, travel and loud celebrations create more chances for animals to slip out or feel spooked. Tools like AI photo matching help families act fast when a pet goes missing. Love Lost connects shelters and neighbors in one place so that people like Pam and Missy can find each other.
What to do if your pet goes missing
Losing a pet can feel overwhelming, but taking fast action helps. These steps guide you through what to do right away.
1) Search your home and neighborhood right away
Look in closets, garages and under furniture. Walk your street and ask neighbors to check yards and sheds.
2) Upload your pet’s photo to Petco Love Lost
Take a clear photo and post it on the site. AI photo matching alerts you when a possible match appears. It also helps others contact you fast.
3) Visit your local shelters in person
Shelters update kennels throughout the day. Staff can guide you and help flag your pet’s profile. Go often until you get updates.
4) Post on local community groups
Use neighborhood apps, local Facebook groups and community forums. Include your pet’s photo, last known location and your contact info.
5) Put up flyers right away
Use a large photo and simple details. Place flyers at busy intersections and near schools, parks and businesses.
6) Contact your pet’s microchip registry
If your pet is microchipped, call the registry or log in to your account. Make sure the chip is registered to you, update your contact info and mark your pet as missing so shelters and vets can reach you fast.
7) Stay consistent with your search
Check Love Lost alerts often. Visit shelters and follow up on every lead. Persistence made the difference for Pam and Grayson.
A pet owner is seen cradling a cat on their lap. (Diego Herrera Carcedo/Anadolu via Getty Images)
How AirTags can help you find a lost pet faster
While tools like AI photo matching are invaluable after a pet goes missing, prevention and real-time tracking can make an enormous difference during the first critical hours. That’s where Apple AirTags come in. An AirTag isn’t a GPS tracker, but it can still be a powerful recovery tool when used correctly. When attached securely to your pet’s collar, an AirTag uses Apple’s vast Find My network: hundreds of millions of iPhones, iPads and Macs that can anonymously and securely relay the AirTag’s location back to you whenever one passes nearby.
If your pet wanders into a neighborhood, apartment complex or busy area, the chances are high that another Apple device will pass nearby and update the location automatically. You won’t know who helped, and they won’t know it was them, but the location can show up on your map within minutes. For indoor cats or dogs that don’t usually roam far, this can be especially helpful. Even a rough location can narrow your search area and save precious time.
Important limits to know: AirTags work best in populated areas. They rely on nearby Apple devices, so coverage may be limited in rural or remote locations. They also don’t update continuously like true GPS pet trackers. That’s why AirTags should be seen as a backup layer, not a replacement for microchipping or dedicated pet trackers.
How to use an AirTag safely with pets
- Use a secure, pet-specific AirTag holder that won’t break easily.
- Attach it to a breakaway collar for cats and dogs to reduce injury risk.
- Make sure Find My notifications are turned on so you get alerts quickly.
- Combine it with microchipping and ID tags for the best protection.
Used together, these tools give you multiple ways to reconnect with your pet, whether minutes or months have passed.
For a list of the best pet trackers, go to Cyberguy.com and search “best pet trackers.”
Kurt’s key takeaways
Grayson’s reunion is a reminder that tech works best when caring people put it to use. AI matched the photos, but Missy took action, and Pam never stopped looking. Their persistence helped a senior cat get home after a long and risky journey.
If your pet went missing today, would you know the first step to bring them home fast? Let us know by writing to us at Cyberguy.com.
TikTok ban: all the news on the app’s shutdown and return in the US
After briefly going dark in the US to comply with the divest-or-ban law targeting ByteDance that went into effect on January 19th, TikTok quickly came back online. It eventually reappeared in the App Store and Google Play as negotiations between the US and China continued, and Donald Trump continued to sign extensions directing officials not to apply the law’s penalties.
Finally, in mid-December, TikTok CEO Shou Zi Chew told employees that the agreements to create TikTok USDS Joint Venture LLC, which includes Oracle, Silver Lake, and MGX as part owners, had been signed, and that the deal is expected to close on January 22nd, 2026. His letter said that for users in the US, the new joint venture will oversee data protection, the security of a newly retrained algorithm, content moderation, and the deployment of the US app and platform.
Read on for all the latest news on the TikTok ban law in the US.
Secret phrases to get you past AI bot customer service
You’re gonna love me for this.
Say you’re calling customer service because you need help. Maybe your bill is wrong, your service is down or you want a refund. Instead of a person, a cheerful AI voice answers and drops you into an endless loop of menus and misunderstood prompts. Now what?
That’s not an accident. Many companies use what insiders call “frustration AI.” The system is specifically designed to exhaust you until you hang up and walk away.
Not today. (Get more tips like this at GetKim.com)
Here are a few ways to bypass “frustration” AI bots. (Sebastian Kahnert/picture alliance via Getty Images)
Use the magic words
You want a human. For starters, don’t explain your issue. That’s the trap. You need words the AI has been programmed to treat differently.
Nuclear phrases: When the AI bot asks why you’re calling, say, “I need to cancel my service” or “I am returning a call.” The word cancel sets off alarms and often sends you straight to the customer retention team. Saying you’re returning a call signals an existing issue the bot cannot track. I used that last weekend when my internet went down, and, bam, I had a human.
Power words: When the system starts listing options, clearly say one word: “Supervisor.” If that doesn’t work, say, “I need to file a formal complaint.” Most systems are not programmed to deal with complaints or supervisors. They escalate fast.
Technical bypass: Asked to enter your account number? Press the pound key (#) instead of numbers. Many older systems treat unexpected input as an error and default to a human.
“Supervisor” is one magic word that can get you a human on the other end of the line. (Neil Godwin/Future via Getty Images)
Go above the bots
If direct commands fail with AI, be a confused human.
The Frustration Act: When the AI bot asks a question, pause. Wait 10 seconds before answering. These systems are built for fast, clean responses. Long pauses often break the flow and send your call to a human.
The Unintelligible Bypass: Stuck in a loop? Act like your phone connection is terrible. Say garbled words or nonsense. After the system says, “I’m having trouble understanding you” three times, many bots automatically transfer you to a live agent.
The Language Barrier Trick: If the company offers multiple languages, choose one that’s not your primary language or does not match your accent. The AI often gives up quickly and routes you to a human trained to handle language issues.
Use these tricks when you genuinely need help. You’re calling for service, not to argue with an AI bot.
Long pauses and garbled language can also get you referred to a human. (iStock)
Get tech-smarter on your schedule
- National radio: Airing on 500-plus stations across the U.S. Find yours or get the free podcast.
- Daily newsletter: Join 650,000 people who read the Current (free!)
- Watch: On Kim’s YouTube channel
Award-winning host Kim Komando is your secret weapon for navigating tech.
Copyright 2026, WestStar Multimedia Entertainment. All rights reserved.