AI cybersecurity risks and deepfake scams on the rise


Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it’s not really them. It’s a deepfake, powered by AI, and you’re the target of a sophisticated scam. These kinds of attacks are happening right now, and they’re getting more convincing every day.

That’s the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world’s biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.

From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that’s touching more lives than ever before.

Join The FREE CyberGuy Report: Get my expert tech tips, critical security alerts, and exclusive deals – plus instant access to my free Ultimate Scam Survival Guide when you sign up!

Illustration of cybersecurity risks. (Kurt “CyberGuy” Knutsson)

AI tools are leaking sensitive data

One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.

This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.

Deepfake scams are now real-time and multilingual

AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.

Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

Illustration of a person video conferencing on their laptop. (Kurt “CyberGuy” Knutsson)

AI is running phishing and scam operations at scale

Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.

Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.

Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.

AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like “Time is running out” might be reworded as “The hourglass is nearly empty for you,” making the message feel more personal and urgent while also avoiding detection.

By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. 

Stolen AI accounts are sold on the dark web

With AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.

Illustration of a person signing into their laptop. (Kurt “CyberGuy” Knutsson)

Jailbreaking AI is now a common tactic

Criminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:

  • Telling the AI to pretend it is a fictional character that has no rules or limitations
  • Phrasing dangerous questions as academic or research-related scenarios
  • Asking for technical instructions using less obvious wording so the request doesn’t get flagged

Some AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.

AI-generated malware is entering the mainstream

AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of their attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.

Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for “text recognition” to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.

Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.

Get a free scan to find out if your personal information is already out on the web 

Poisoned AI models are spreading misinformation

Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:

  • Training poisoning: Attackers sneak false or harmful data into the model during development
  • Retrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answers

In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.

A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.

Illustration of a hacker at work (Kurt “CyberGuy” Knutsson)

How to protect yourself from AI-driven cyber threats

AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:

1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.

2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.

3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.
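
For the technically curious, here is a rough sketch of how the one-time codes in most authenticator apps are generated under the hood, using the standard TOTP algorithm (RFC 6238). The secret shown is a made-up placeholder; real secrets are issued by each service when you enable 2FA.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    # The moving factor: number of 30-second intervals since the Unix epoch
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the last nibble picks 4 bytes of the MAC
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical shared secret, for illustration only
print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code is derived from a secret that never travels with your password and expires within seconds, a stolen password alone is not enough to break in.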

4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.

5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren't cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It's what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.

6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.

8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.
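
To illustrate what "strong and unique" means in practice, here is a minimal sketch of the kind of generator a password manager uses, drawing from a cryptographically secure random source. The length and character set here are illustrative assumptions, not any particular manager's defaults.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password per account so one leak can't unlock the rest
print(generate_password())
```

The point of uniqueness is that a breach at one site can't be replayed against your other accounts, which is exactly what credential stuffing attacks count on.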

9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit. 

Kurt’s key takeaways

Cybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.

Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter


Copyright 2025 CyberGuy.com.  All rights reserved.


The first Dolby FlexConnect soundbar is coming from LG


Dolby Atmos FlexConnect technology debuted this year with the TCL Z100 speakers, and now we’re getting our first FlexConnect soundbar thanks to LG. The new H7 soundbar — which runs on the same Alpha 11 Gen 3 chip as LG’s OLEDs and new Micro RGB LED — is a part of the LG Sound Suite, a modular home audio system the company will debut at CES 2026. In addition to the soundbar, the Sound Suite will include the M5 and M7 surround speakers and the W7 subwoofer. All of the speakers feature Peerless Audio components.

The two main drawbacks of TCL’s Dolby FlexConnect implementation were the limitation of only allowing four connected speakers, including a sub, and the need for a 2025 QM series TCL TV. So you needed to pick between better sound coverage with a fourth speaker or more bass performance with a sub. LG’s Sound Suite, on the other hand, will allow you to connect the soundbar with up to four surround speakers and a subwoofer for a potential 13.1.7-channel system.

And while the speakers can be used with a compatible LG TV (including the 2026 premium LG TV lineup and 2025’s C5 and G5 OLEDs), it isn’t required. It’s possible to use the H7 soundbar with any TV — or without — and have it act as what’s called the lead device to connect the surround speakers and sub. LG says there are 27 different speaker configurations possible, from using two speakers as a stereo pair up to the full system with soundbar, surrounds, and sub.

In my experience with the TCL Z100, calibrating FlexConnect speakers to your space is also fast. Once they're in place and plugged in, a short musical clip plays for a few seconds and setup is complete. The system works out where the speakers are placed and how to optimize the surround and Atmos sound for your room. With other room correction software, the process can take much longer, requiring sound readings from multiple locations in the room.

LG is also using ultra-wideband technology, which it calls Sound Follow, to adjust the sweet spot based on your listening position. What will be interesting to see with the LG Sound Suite's Dolby FlexConnect implementation is how customizable it is after setup (for instance, adjusting subwoofer levels).

I’ll be hearing the system at CES and plan on reviewing it when it’s available to see how well the technology translates into a home.


The fake refund scam: Why scammers love holiday shoppers


The holiday shopping season should feel exciting, but for scammers, it’s rush hour. And this year, one trick is hitting more inboxes and phones than ever: the fake refund scam. If you’ve ever seen an unexpected “Your refund has been issued,” “Your payment failed” or “We owe you money” email or text during November or December, it wasn’t an accident.

Scammers know you’re buying more, tracking more packages and juggling more receipts than at any other time of year. That chaos makes fake refund scams incredibly effective and incredibly dangerous.

Here’s why these scams are spreading, how to spot them instantly and the one thing you can do today to stop scammers from targeting you in the first place.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

Fake refund emails can look convincing during the holidays, making it easy to fall for a scam when your inbox is overflowing. (Kurt “CyberGuy” Knutsson)

Why refund scams skyrocket during the holidays

Scammers strike when Americans are distracted, rushed and making dozens of purchases. Black Friday, Cyber Monday and holiday gift-buying create the perfect storm:

1) You’re expecting legitimate refunds

Holiday shopping means:

  • Items going out of stock
  • Orders getting canceled
  • Packages arriving late
  • Prices changing
  • Stores offering “Best Price Guarantee” refunds.

Scammers know this. When you’re already expecting refund emails, their fake ones blend right in.

2) You’re spending more, which means bigger targets

A study shows that Americans will spend 3.6% more on holiday shopping this year than last. A $200 to $500 purchase is completely normal during this season. Other reports show overall spending declining but note that people spend, on average, over $600 during the Black Friday promotions alone.

Expenses stack up, new things arrive, some get returned and a “$249 refund issued” message doesn’t look suspicious—it looks plausible. But it’s crucial to check whether that message is real. Never click any links without a thorough look at the sender’s email address, name and the content of the message.

3) Your inbox is overflowing

Have you been eyeing a new home appliance? Or a present for a loved one? Have you saved anything in your cart just to see if the price drops? Thanks to Black Friday, your inbox is probably filled with:

  • Promotional codes
  • Offers
  • Shipping updates
  • Order confirmations
  • Receipts
  • Return notifications.

It’s easy to lose track of your orders and packages amidst the influx of emails. And when you’re skimming more than 200 promotions, scams become harder to catch.

4) They know exactly what you purchased

Scammers get their information from data brokers, companies that collect, package and sell your personal information. Your profile can include anything from your name and contact information to your purchase history and even your financial situation.

In general, data brokers and shopping apps sell patterns, including:

  • Where you shop
  • How much you spend
  • What categories you buy
  • Recent purchases
  • Your email, phone number and address.

And scammers buy that information to craft compelling and personalized attacks. That’s why their fake refund emails often mimic retailers you actually used.


Scammers use urgent warnings and realistic details to pressure you into clicking links that steal your personal information. (Kurt “CyberGuy” Knutsson)

How the fake refund scam works

Scammers usually follow one of three playbooks:

“Your refund is ready – verify your account.” You click a link and are taken to what looks like Amazon, Walmart, UPS, Target or Best Buy. When you enter your login on the spoofed page, scammers capture your credentials.

“We overcharged you. Click here for your refund.” It asks for your debit card number, your bank login and your PayPal credentials. Or worse: it installs malware that steals them automatically.

Phone version: “We issued a refund by mistake.” You get a call from someone pretending to be Amazon customer service, PayPal support, or even your bank. They say they “refunded too much money” and need you to send back the difference. Some even screen-share to drain bank accounts in real time.

These scams cost Americans hundreds of millions of dollars every year. The FTC reports that impostor scams accounted for the second-highest reported fraud losses in 2024, at $2.95 billion.

What these emails look like so you can spot them fast

Scammers are getting more sophisticated. Fake refund messages often include:

  • Your correct name
  • A real store logo
  • A real order amount
  • A believable order number
  • “Click to view refund” buttons
  • Deadline pressure like “respond within 24 hours.”

Here’s the giveaway: No legitimate retailer requires you to enter banking info to receive a refund, ever.

Note that scams often ask you to:

  • Confirm a payment
  • Verify personal info
  • Log in through a link
  • Provide banking details
  • Download an invoice.

The simplest way to protect yourself before the holiday peak

Deleting your data manually from data broker sites is technically possible, but extremely tedious. Some require government ID uploads, faxed forms, multiple follow-up requests and updates every 30 to 90 days because they relist your data.

This is why most people almost never do it. A data removal service, however, automates the entire process. These services:

  • Identify which broker sites have your info
  • Send official deletion requests on your behalf
  • Force them to remove your data
  • Continually monitor and re-request removals
  • Block brokers from relisting you

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com


Criminals often rely on data from broker sites to personalize refund scams, which is why reducing your digital footprint matters. (Kurt “CyberGuy” Knutsson)

How to protect yourself this season (3 quick steps)

Remember to follow these few simple steps to safeguard yourself against targeted scams.

1) Never click refund links in emails or texts

Go directly to your retailer’s website and check your actual order history. Verify the email address of the sender and only communicate with official representatives of the retailer.
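
If you want to see why the sender’s address matters more than the display name, here is a minimal sketch of pulling the real domain out of an email’s From header. The lookalike address is a made-up example.

```python
from email.utils import parseaddr

def sender_domain(from_header: str) -> str:
    """Return the domain of the address hiding behind a display name."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

# The display name says "Amazon," but the actual domain gives the scam away
print(sender_domain("Amazon Refunds <support@amaz0n-refunds.example>"))
# -> amaz0n-refunds.example
```

Most mail apps show only the friendly display name by default, which is exactly what refund scammers exploit; tapping or hovering on the sender reveals the underlying address.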

2) Turn on multi-factor authentication

Set up two-factor authentication (2FA) for all of your accounts. With 2FA, you’ll need to authorize logins via email, text message or generated PINs. So even if you accidentally enter your password on a fake site, 2FA can stop the breach.

3) Limit how scammers can find you

This is the part most people skip—and it’s why they stay targets. Removing your personal info from data broker sites cuts off scammers’ access to your real details. A data removal service automates the process and keeps it ongoing, which is why I recommend it to my most privacy-conscious readers.

Kurt’s key takeaways

Refund scams explode during the holiday shopping season because scammers rely on two things: chaos in your inbox and your personal data being sold behind your back. You can’t stop scammers from sending fake emails, but you can stop them from targeting you specifically. Before peak holiday shopping hits, take a moment to clean up your data trail. You’ll end up with fewer scams, fewer risks and far more peace of mind.

Have you received a suspicious refund email or text this season? Share your experience so we can help warn others. Let us know by writing to us at Cyberguy.com


Copyright 2025 CyberGuy.com.  All rights reserved.


Rad Power Bikes files for bankruptcy protection


Rad Power Bikes, the once dominant electric bicycle brand in the US, filed for Chapter 11 bankruptcy protection this week as it seeks to sell off the company. The move comes less than a month after Rad Power said it could not afford to recall its older e-bike batteries, which had been designated a fire risk by the US Consumer Product Safety Commission.

The bankruptcy, which was first reported by Bicycle Retailer, was filed in US Bankruptcy Court for the Eastern District of Washington, near the company’s headquarters in Seattle. Rad Power lists its estimated assets at $32.1 million and estimated liabilities at $72.8 million. Its inventory of e-bikes, spare parts, and accessories is listed at $14.2 million, Bicycle Retailer says.

It’s a stunning reversal for the once leading e-bike company in the US. Mike Radenbaugh founded the company in 2015 after several years of selling custom-made e-bikes to customers on the West Coast. Rad Power quickly grew to over 11 distinct models, including the fat-tire RadRover, the long-tail RadWagon, and the versatile RadRunner. Rad Power Bikes raised an approximate total of $329 million across several funding rounds, primarily in 2021, with major investments from firms like Fidelity, Morgan Stanley, and T. Rowe Price.

But in the wake of the post-covid bike boom, things started to go south. There were supply chain disruptions, safety recalls, several rounds of layoffs, and executive turnover. Last month, Rad Power said it was facing “significant financial challenges” that could lead to its imminent closure without a cash infusion.

The CPSC warning apparently was the nail in the coffin. The company’s older batteries could “unexpectedly ignite and explode,” the agency warned, citing 31 fires, including 12 reports of property damage totaling $734,500. There weren’t any injuries, but the company said it couldn’t afford a costly recall.

Advertisement

Rad Power could still live on if it’s able to find a buyer for its assets and brand. Dutch e-bike maker VanMoof found a buyer following its 2023 bankruptcy, and Belgium’s Cowboy is in talks to be acquired by a French holding company that owns several bike brands. Rad Power will continue to operate as it restructures its debts under court supervision, and in a statement to Bicycle Retailer said it will continue to sell bikes and work with customers and vendors as it moves forward with the process.
