AI cybersecurity risks and deepfake scams on the rise

Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it’s not really them. It’s a deepfake, powered by AI, and you’re the target of a sophisticated scam. These kinds of attacks are happening right now, and they’re getting more convincing every day.

That’s the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world’s biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.

From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that’s touching more lives than ever before.

Join The FREE CyberGuy Report: Get my expert tech tips, critical security alerts, and exclusive deals – plus instant access to my free Ultimate Scam Survival Guide when you sign up!

Illustration of cybersecurity risks. (Kurt “CyberGuy” Knutsson)


AI tools are leaking sensitive data

One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.

This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.
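If you still want AI's help without the exposure, one practical habit is scrubbing anything secret-looking from a prompt before it leaves your machine. Here is a minimal Python sketch of that idea; the redact_prompt helper and its patterns are illustrative assumptions, not a complete safeguard:

```python
# A minimal sketch: scrub obvious secrets from text before it is sent
# to any AI tool. The patterns below are illustrative assumptions, not
# an exhaustive safeguard; real data-loss-prevention tools go further.
import re

PATTERNS = {
    "[REDACTED EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[REDACTED CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "[REDACTED KEY]": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace anything that looks like a secret with a placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize this email from jane@example.com, API key sk-abc123DEF456ghi789JKL0"
print(redact_prompt(prompt))
```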

Deepfake scams are now real-time and multilingual

AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.

Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

Illustration of a person video conferencing on their laptop. (Kurt “CyberGuy” Knutsson)


AI is running phishing and scam operations at scale

Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.

Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.

Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.

AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like “Time is running out” might be reworded as “The hourglass is nearly empty for you,” making the message feel more personal and urgent while also avoiding detection.

By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. 


Stolen AI accounts are sold on the dark web

With AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.
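If you want to gauge your own exposure, the free Have I Been Pwned service lets you check whether a password has appeared in known breach dumps without ever sending the password itself. Here is a rough Python sketch of how that lookup works:

```python
# Check whether a password shows up in known breach dumps using the free
# Have I Been Pwned range API. Thanks to "k-anonymity", only the first
# five characters of the password's SHA-1 hash ever leave your machine.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # how many breach records contained it
    return 0

print(times_pwned("password123"))  # a huge number; never reuse a password like this
```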


Illustration of a person signing into their laptop. (Kurt “CyberGuy” Knutsson)


Jailbreaking AI is now a common tactic

Criminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:

  • Telling the AI to pretend it is a fictional character that has no rules or limitations
  • Phrasing dangerous questions as academic or research-related scenarios
  • Asking for technical instructions using less obvious wording so the request doesn’t get flagged

Some AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.

AI-generated malware is entering the mainstream

AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of its attacks are powered by AI. FunkSec has also used AI to help launch denial-of-service attacks, which flood websites or services with fake traffic until they crash or go offline. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.

Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for “text recognition” to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.

Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.

Get a free scan to find out if your personal information is already out on the web 

Poisoned AI models are spreading misinformation

Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:

  • Training poisoning: Attackers sneak false or harmful data into the model during development
  • Retrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answers

In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.
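To see why training poisoning works, consider a toy experiment: corrupt a slice of a model's training labels and compare the result against a model trained on clean data. This is a deliberately simplified sketch using scikit-learn's synthetic data; real-world poisoning is far subtler than random label flipping:

```python
# Toy illustration of training poisoning: flip a share of the training
# labels and compare accuracy against a model trained on clean data.
# Deliberately simplified; requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The "attacker" flips 30% of the training labels before training.
rng = np.random.default_rng(0)
flipped = np.where(rng.random(len(y_tr)) < 0.30, 1 - y_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, flipped)

print(f"clean model accuracy:    {clean.score(X_te, y_te):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.2f}")
```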

A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.

Illustration of a hacker at work (Kurt “CyberGuy” Knutsson)


How to protect yourself from AI-driven cyber threats

AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:

1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.


2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.

3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.
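For the curious, those six-digit codes are not magic. Most authenticator apps implement an open standard called TOTP (RFC 6238), which hashes a shared secret together with the current 30-second time window. A minimal Python sketch, using a made-up example secret:

```python
# What an authenticator app computes: a TOTP code (RFC 6238) derived
# from a shared secret and the current 30-second time window. The secret
# below is a made-up example; real ones come from the enrollment QR code.
import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # time step number
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a fresh code every 30 seconds
```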

4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.

5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren't cheap, and neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It's what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.


6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited at scale by cybercriminals using AI.

8) Use a secure password manager: Stolen AI accounts and credential stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it far more difficult for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed Password Managers of 2025 here.
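To give a sense of what "strong and unique" means in practice, here is roughly what a password manager does each time it generates a login, sketched with Python's cryptographically secure secrets module:

```python
# Roughly what a password manager does when it generates a login:
# a long random string from a cryptographically secure source, unique
# per site, so one breached account cannot unlock any other.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```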

9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, ensure all your devices, browsers, and applications are updated with the latest security patches. Regular updates close security gaps that AI-powered malware and cybercriminals are actively seeking to exploit. 

Kurt’s key takeaways

Cybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve ever seen. From deepfake video calls and AI-generated phishing emails to stolen AI accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multi-factor authentication, and avoid sharing sensitive data with AI tools you do not fully trust.


Have you noticed AI scams getting more convincing? Let us know your experience or questions by writing us at Cyberguy.com/Contact. Your story could help someone else stay safe.

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter


Copyright 2025 CyberGuy.com. All rights reserved.


Defense Secretary Pete Hegseth designates Anthropic a supply chain risk

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.


As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.


What Trump’s ‘ratepayer protection pledge’ means for you


When you open a chatbot, stream a show or back up photos to the cloud, you are tapping into a vast network of data centers. These facilities power artificial intelligence, search engines and online services we use every day. Now there is a growing debate over who should pay for the electricity those data centers consume.

During his State of the Union address this week, President Trump introduced a new initiative called the “ratepayer protection pledge” to shift AI-driven electricity costs away from consumers. The core idea is simple: tech companies that run energy-intensive AI data centers should cover the cost of the extra electricity they require rather than passing those costs on to everyday customers through higher utility rates.

That sounds straightforward. The hard part is what happens next.


Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

At the State of the Union address Feb. 24, 2026, President Trump unveiled the “ratepayer protection pledge” aimed at shielding consumers from rising electricity costs tied to AI data centers. (Nathan Posner/Anadolu via Getty Images)

Why AI is driving a surge in electricity demand

AI systems require enormous computing power. That computing power requires enormous electricity. Today’s data centers can consume as much power as a small city. As AI tools expand across business, healthcare, finance and consumer apps, energy demand has risen sharply in certain regions.
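That "small city" comparison holds up to quick back-of-envelope math. Using illustrative figures rather than anything from the pledge itself, a single large AI facility works out to tens of thousands of households:

```python
# Back-of-envelope check on "as much power as a small city". Both inputs
# are illustrative assumptions, not figures from the pledge: a large AI
# data center drawing ~100 MW continuously, and an average U.S. household
# using ~10,700 kWh per year.
DATACENTER_MW = 100
HOURS_PER_YEAR = 24 * 365
HOUSEHOLD_KWH_PER_YEAR = 10_700

datacenter_kwh_per_year = DATACENTER_MW * 1_000 * HOURS_PER_YEAR
households = datacenter_kwh_per_year / HOUSEHOLD_KWH_PER_YEAR
print(f"{datacenter_kwh_per_year:,} kWh/year, about {households:,.0f} households")
# Roughly 82,000 households' worth of electricity from one facility.
```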

Utilities have warned that the current grid in many parts of the country was not built for this level of concentrated demand. Upgrading substations, transmission lines and generation capacity costs money. Traditionally, those costs are spread across the rates paid by homes and small businesses. That is where the pledge comes in.

What the ratepayer protection pledge is designed to do

Under the ratepayer protection pledge, large technology companies would:

  • Cover the full cost of additional electricity tied to their data centers
  • Build their own on-site power generation to reduce strain on the public grid

Supporters say this approach separates residential energy costs from large-scale AI expansion. In other words, your household bill should not rise simply because a new AI data center opens nearby. So far, Anthropic is the clearest public backer. CyberGuy reached out to Anthropic for a comment on its role in the pledge. A company spokesperson referred us to a tweet from Anthropic Head of External Affairs Sarah Heck.

“American families shouldn’t pick up the tab for AI,” Heck wrote in a post on X. “In support of the White House ratepayer protection pledge, Anthropic has committed to covering 100% of electricity price increases that consumers face from our data centers.”

That makes Anthropic one of the first major AI companies to publicly state it will absorb consumer electricity price increases tied to its data center operations. Other major firms may be close behind. The White House reportedly plans to host Microsoft, Meta and Anthropic in early March to discuss formalizing a broader deal, though attendance and final terms have not been confirmed publicly.

Microsoft also expressed support for the initiative. 

“The ratepayer protection pledge is an important step,” Brad Smith, Microsoft vice chair and president, said in a statement to CyberGuy. “We appreciate the administration’s work to ensure that data centers don’t contribute to higher electricity prices for consumers.”  

Industry groups also point to companies such as Google and utilities including Duke Energy and Georgia Power as making consumer-focused commitments tied to data center growth. However, enforcement mechanisms and long-term regulatory details remain unclear.



The White House plans talks with Microsoft, Meta and Anthropic about shifting AI energy costs away from consumers. (Eli Hiller/For The Washington Post via Getty Images)

How this could change the economics of AI

AI infrastructure is already one of the most expensive technology buildouts in history. Companies are investing billions in chips, servers and real estate. If firms must also finance dedicated power plants or pay premium rates for grid upgrades, the cost of running AI systems increases further. That could lead to:

  • Slower expansion in some markets
  • Greater investment in renewable energy and storage
  • More partnerships between tech firms and utilities

Energy strategy may become just as important as computing strategy. For consumers, this shift signals that electricity is now a central part of the AI conversation. AI is no longer only about software. It is also about infrastructure.

The bigger consumer tech picture

AI is becoming embedded in smartphones, search engines, office software and home devices. As adoption grows, so does the hidden infrastructure supporting it. Energy is now part of the conversation around everyday technology. Every AI-generated image, voice command or cloud backup depends on a power-hungry network of servers.

By asking companies to account more directly for their electricity use, policymakers are acknowledging a new reality. The digital world runs on very physical resources. For you, that shift could mean more transparency. It also raises new questions about sustainability, local impact and long-term costs.



As AI expansion strains the grid, a new proposal would require tech firms to fund their own power needs. (Sameer Al-Doumy/AFP via Getty Images)

What this means for you

If you are a homeowner or renter, the practical question is simple. Will this protect my electric bill? In theory, separating data center energy costs from residential rates could reduce the risk of price spikes tied to AI growth. If companies fund their own generation or grid upgrades, utilities may have less reason to spread those costs among all customers.

That said, utility pricing is complex. It depends on state regulators, long-term planning and local energy markets.

Here is what you can watch for in your area:

  • New data center construction announcements
  • Utility filings that mention large commercial load growth
  • Public service commission decisions on rate adjustments

Even if you rarely use AI tools, your community could feel the effects of a nearby data center. The pledge is intended to keep those large-scale power demands from showing up in your monthly bill.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

The ratepayer protection pledge highlights an important turning point. AI is no longer only about innovation and speed. It is also about energy and accountability. If tech companies truly absorb the cost of their expanding power needs, households may avoid some of the financial strain tied to rapid AI growth. If not, utility bills could become an unexpected front line in the AI era.

As AI tools become part of daily life, how much extra power are you willing to support to keep them running? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.


Here’s your first look at Kratos in Amazon’s God of War show

Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image but a notable one showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.
