How to spot and stop AI phishing scams

Artificial intelligence can do a lot for us. Need to draft an email? AI has you covered. Looking for a better job? AI can help with that, too. It can even boost our health and fitness. Some tools, like AI-powered exoskeletons, can lighten heavy loads and improve performance. 

But it’s not all sunshine and progress. Hackers are also turning to AI, and they’re using it to make phishing scams smarter and harder to spot. These scams are designed to trick people into handing over personal details or money. One woman recently lost $850,000 after a scammer, posing as Brad Pitt with the help of AI, convinced her to send money. Scary, right? 

The good news is that you can learn to recognize the warning signs. Before we dive into how to protect yourself, let’s break down what AI phishing scams really are.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CyberGuy.com newsletter.

A single click on a fake link could expose your personal information. (Kurt “CyberGuy” Knutsson)

What are AI phishing scams?

AI phishing scams are attacks in which hackers use AI to make their schemes more convincing. AI helps them create super-realistic emails, messages, voices and even videos, making it harder to tell what’s real and what’s fake. Old-school phishing emails were easy to spot because of typos and bad grammar. Thanks to AI tools like ChatGPT, however, hackers can now write flawless, professional-sounding emails that are much harder to detect. And AI-generated emails aren’t the only threat. Hackers are also using AI to pull off scams like:

  • Voice clone scams: They use AI to copy the voice of someone you know, like a friend or family member, to trick you.
  • Deepfake video scams: They create super-realistic videos of someone you trust, like a loved one or a celebrity, to manipulate you.

Here’s how you can spot these AI-driven scams before they fool you.

1) Spot common phishing email red flags

Though hackers can use AI tools to write grammatically perfect email copy, AI phishing emails still carry some classic red flags. Here are the telltale signs of an AI-driven phishing email:

  • Suspicious sender’s address that doesn’t match the company’s domain.
  • Generic greetings like “Dear Customer” instead of your name.
  • Urgent requests pressuring you to act immediately.
  • Unsolicited attachments and links requiring you to take action.

The biggest red flag is the sender’s email address. There is often a slight change in the spelling, or the address uses an entirely different domain altogether. For example, a hacker might write from xyz@PayPall.com, or from a personal Gmail.com or Outlook.com address, while pretending to be from PayPal.
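
For readers who want to see that check as code, here is a minimal Python sketch (my illustration, not a tool from this article) that compares a sender’s address against a company’s real domain. The sample addresses and the is_suspicious_sender helper are made up for the example.

```python
# Hypothetical helper for illustration: flag any sender whose domain is not
# an exact match for the company's real domain.
def is_suspicious_sender(address: str, trusted_domain: str) -> bool:
    domain = address.rsplit("@", 1)[-1].lower()
    return domain != trusted_domain.lower()

# A misspelled domain ("PayPall.com") and a personal Gmail address both fail
# the exact-match test, while the legitimate address passes.
for sender in ["service@paypal.com", "xyz@PayPall.com", "support@gmail.com"]:
    status = "suspicious" if is_suspicious_sender(sender, "paypal.com") else "matches"
    print(f"{sender}: {status}")
```

Real spam filters do far more than this, but that exact-match rule alone catches the lookalike-domain trick described above.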

Hackers are using AI to create scams that look frighteningly real. (Kurt “CyberGuy” Knutsson)

2) Analyze the language for AI-generated patterns

It used to be easier to spot phishing emails by catching silly typos. Thanks to AI, hackers can now craft flawless copy, but you can still sense a phishing email if you analyze the language of the body text carefully. The most prominent sign of AI-generated copy is writing that is highly formal yet makes clumsy attempts to sound personal. You might not notice it at first, but a close read often raises a red flag: the language of such emails tends to feel robotic.

3) Watch for AI voice clone scam warning signs 

With AI, it is possible to clone voices, so it’s no surprise that voice phishing, also known as vishing, is rising steeply. Recently, a father lost $4 billion in Bitcoin to vishing. Though AI voice cloning has improved, it’s still flawed, and you can catch inconsistencies by verifying the speaker’s identity. Ask specific questions that only the real person would know; this can reveal gaps in the scammer’s script. The voice may also sound robotic at times because of imperfections in cloning technology. So the next time you receive a call that creates a sense of urgency, ask as many questions as you can to verify who you’re speaking with. It’s also worth confirming any claims through a second channel; if the caller asks for something, verify the request through the company’s or person’s official email before you act.

4) Identify visual glitches and oddities in video calls

Deepfake videos are getting pretty convincing, but they’re not flawless yet. Visual inconsistencies and oddities can give a fake video away, so watch carefully for unnatural eye movements, lip-sync issues, weird lighting, shadows and voice inconsistencies. You can also run the clip through a deepfake video detection tool to help confirm whether it’s fake.

5) Set up and use a shared secret

A shared secret is something only you and your loved ones know. If someone claiming to be a friend or family member contacts you, ask for the shared secret. If they can’t answer, you’ll know it’s a scam.

Hackers are turning to artificial intelligence to make phishing scams smarter and harder to spot. (miniseries/Getty Images)

How to protect yourself from AI phishing scams

AI phishing scams rely on tricking people into trusting what looks and sounds real. By staying alert and practicing safe habits, you can lower your risk. Here’s how to stay ahead of scammers:

1) Stay cautious with unsolicited messages

Never trust unexpected emails, texts or calls that ask for money, personal details or account access. Scammers use urgency to pressure you into acting fast. Slow down and double-check before clicking or responding. If something feels off, it probably is.

2) Use a data removal service

Protect your privacy with a trusted data removal service to reduce the amount of personal information exposed online; fewer exposed details make it harder for scammers to target you. While no service can guarantee the complete removal of your data from the internet, a data removal service is still a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you also reduce the risk of scammers cross-referencing data from breaches with whatever they find on the dark web.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting CyberGuy.com.

3) Check links before you click and install strong antivirus software

Hackers often hide malicious links behind convincing text. Hover your cursor over a link to see the actual URL before you click; if the address looks odd, misspelled or unrelated to the company, skip it. Clicking blindly can download malware or expose your login details. Also, install strong antivirus software on all of your devices. Good antivirus protection blocks phishing links, scans for malware and can alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
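
To make the “hover and read the real URL” habit concrete, here is a short Python sketch (my own illustration, not a tool mentioned in this article) that pulls the actual hostname out of a link target, which is the same information your mail client shows when you hover. The phishing URL is made up.

```python
from urllib.parse import urlparse

# Hypothetical phishing link: the visible text might say "paypal.com", but the
# real destination is whatever hostname sits in the href.
href = "https://secure-paypal.account-verify.example.com/login"

hostname = urlparse(href).hostname or ""
print(hostname)  # secure-paypal.account-verify.example.com
print(hostname == "paypal.com" or hostname.endswith(".paypal.com"))  # False -> don't click
```

The “paypal” buried in the subdomain is exactly the trick to watch for: only the end of the hostname tells you who really owns the link.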

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at CyberGuy.com.

4) Turn on two-factor authentication

Even if a scammer steals your password, two-factor authentication (2FA) can keep them locked out. Enable 2FA on your email, banking and social media accounts. Choose app-based codes or a hardware key over text messages for stronger protection.
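
If you’re curious why app-based codes beat passwords alone, here is a rough Python sketch of how time-based one-time passwords (TOTP), the codes an authenticator app displays, work, using the third-party pyotp library. The secret below is generated on the spot purely for illustration.

```python
import pyotp  # third-party library: pip install pyotp

# The service and your authenticator app share a secret (usually delivered
# once as a QR code). Each six-digit code is derived from that secret plus
# the current time, so it changes every 30 seconds.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()         # what your authenticator app would display
print(code)
print(totp.verify(code))  # the service runs the same math to check the code
# A phished password is useless without the current code, which a scammer
# can't compute without the shared secret.
```

Hardware security keys go a step further by tying the check to the real website’s address, which is why they resist phishing even better than codes.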

5) Limit what you share online

The more personal details you share, the easier it is for hackers to make AI scams believable. Avoid posting sensitive information like travel plans, birthdays or financial updates on social media. Scammers piece these details together to build convincing attacks.

6) Verify requests through another channel

If you get a message asking for money or urgent action, confirm it in another way. Call the person directly using a number you know, or reach out through official company channels. Don’t rely on the same email, text or call that raised suspicion in the first place.

Kurt’s key takeaways 

AI is making scams more convincing and harder to detect, but you can stay ahead by recognizing the warning signs. Watch out for suspicious email addresses, unnatural language, robotic voices and visual glitches in videos, and always verify information through a second channel. Establishing a shared secret with loved ones adds one more layer of protection against AI-driven voice and video scams.

Have you experienced any AI-driven phishing scams yet, and what do you think is the best way to spot such a scam? Let us know by writing to us at CyberGuy.com.

Copyright 2025 CyberGuy.com.  All rights reserved.

Two more xAI co-founders are among those leaving after the SpaceX merger

Since the xAI-SpaceX merger was announced last week, combining the two companies (as well as social media platform X) at a reported $1.25 trillion valuation, the biggest merger of all time, a handful of xAI employees and two of its co-founders have abruptly exited the company, penning long departure announcements online. Some also announced that they were starting their own AI companies.

Co-founder Yuhuai (Tony) Wu announced his departure on X, writing that it was “time for [his] next chapter.” Jimmy Ba, another co-founder, posted something similar later that day, saying it was “time to recalibrate [his] gradient on the big picture.” The departures mean that xAI is now left with only half of its original 12 co-founders on staff.

It all comes after changing plans for the future of the combined companies, which Elon Musk recently announced would involve “space-based AI” data centers and vertical integration involving “AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform.” Musk reportedly also talked of plans to build an AI satellite factory and city on the moon in an internal xAI meeting.

Musk wrote on X Wednesday that “xAI was reorganized a few days ago to improve speed of execution” and claimed that the process “unfortunately required parting ways with some people,” then put out a call for more people to apply to the company. He also posted a recording of xAI’s 45-minute internal all-hands meeting that announced the changes.

“We’re organizing the company to be more effective at this scale,” Musk said during the meeting. He added that the company will now be organized in four main application areas: Grok Main and Voice, Coding, Imagine (image and video), and Macrohard (“which is intended to do full digital emulation of entire companies,” Musk said).

2026 Valentine’s romance scams and how to avoid them

Valentine’s Day should be about connection. However, every February also becomes the busiest season of the year for romance scammers. In 2026, that risk is higher than ever.

These scams are no longer simple “lonely hearts” schemes. Instead, modern romance fraud relies on artificial intelligence, data brokers and stolen personal profiles. Rather than sending random messages and hoping for a response, scammers carefully select victims using detailed personal data. From there, they use AI to impersonate real people, create convincing conversations and build trust at scale.

As a result, if you are divorced, widowed or returning to online dating after the holidays, this is often the exact moment scammers target you.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

Romance scams surge around Valentine’s Day as criminals use artificial intelligence and stolen data to target widowed, divorced and older adults returning to online dating. (Omar Karim/Middle East Images/AFP via Getty Images)

The new face of romance scams in 2026

Romance scams are no longer slow, one-on-one cons. They’re now high-tech operations designed to target hundreds of people at once. Here’s what’s changed:

1) AI-generated personas that look and sound real

In the past, fake profiles used stolen photos and broken English. Today, scammers use AI-generated faces, voices and videos that don’t belong to any real person, making them almost impossible to reverse search.

You may be interacting with a profile that:

  • Has years of realistic-looking social media posts
  • Shares daily photos that match the story they tell
  • Sends customized voice notes that sound natural
  • Appears on “video calls” using AI face-mapping software.

Some scam networks even create entire fake families and friend groups online, so the person appears to have a real life, real friends and real history. To the victim, it feels like a genuine connection because the “person” behaves like one in every way.

2) Automated relationship scripts that adapt to you

Behind the scenes, many scammers now use software platforms that manage dozens of conversations at once. This is known as “scamware” and is incredibly hard to flag.

These systems:

  • Track your replies
  • Flag emotional triggers (grief, loneliness, fear, trust)
  • Suggest responses based on your mood and history.

When you mention that you are widowed, the tone quickly becomes more comforting. Meanwhile, if you say you are financially stable, the story shifts toward so-called “business opportunities.” And if you hesitate, the system responds by introducing urgency or guilt. It feels personal, but in reality, you’re being guided through a pre-written emotional funnel designed to lead to one outcome: money.

3) Crypto and “investment romance” scams

One of the fastest-growing versions of romance fraud now blends love and money. A BBC World Service investigation recently revealed that many romance scams are now run by organized criminal networks across Southeast Asia, using what insiders call the “pig butchering” model, where victims are slowly “fattened up” with trust before being financially destroyed.

These operations use call-center-style setups, data-broker profiles, scripted conversations and AI tools to target thousands of people at once. This is not accidental fraud. It’s an industry.

And the reason you were selected is simple. Your personal data made you easy to find, easy to profile and easy to target.

After weeks of trust-building, the scammer introduces:

  • A “private” crypto platform
  • A fake trading app
  • A business or investment opportunity they claim to use themselves

They may show fake dashboards, fake profits and even let you “withdraw” small amounts at first to build trust. But once larger sums are sent, the site disappears and so does the person. There is no investment. There is no account. And there is no way to recover the funds.

Data brokers selling personal details fuel a new wave of romance fraud by helping scammers select financially stable, older victims before contact is made. (Jens Büttner/picture alliance via Getty Images)

How scammers find you before you ever match

The biggest misconception is that romance scams begin on dating apps. They don’t. They begin long before that, inside massive databases run by data brokers. These companies collect and sell profiles that include:

  • Your age and marital status
  • Whether you’re widowed or divorced
  • Your home address history
  • Your phone number and email
  • Your family members and relatives
  • Your income range and retirement status.

Scammers buy this data to build shortlists of ideal victims.

The data brokers behind romance scams

Using these broker profiles, scammers filter for:

  • Age 55-plus
  • Widowed or divorced
  • Living alone
  • Financially stable
  • Not active on social media.

That’s how they know who to target before the first message is ever sent.

Why are widowed and retired adults targeted first?

Scammers don’t pick their targets by accident. They go after people who are statistically more likely to respond. If you’ve lost a spouse, moved recently or reentered the dating world, your personal data often shows it, and that makes you a priority target. And once your name lands on a scammer’s list, it can be sold again and again. That’s why many victims say, “I blocked them, but new ones keep showing up.” It’s not a coincidence. It’s data recycling.

How the scam usually unfolds

Most romance scams follow the same pattern:

  • Friendly introduction: A warm message. No pressure. Often references something personal about you.
  • Fast emotional bonding: They mirror your values, your experiences, even your grief.
  • Distance and excuses: They can’t meet. There’s always a reason: military deployment, overseas job, business travel.
  • A sudden “crisis”: Medical bills, business losses, frozen accounts, investment opportunities.
  • Money requests: Wire transfers, gift cards, crypto or “temporary help.”

By the time money is involved, the emotional connection is already strong. Many victims send thousands before realizing it’s a scam.

The Valentine’s Day cleanup that stops scams at the source

If you want fewer scam messages this year, you need to remove your personal information from the places scammers buy it. That’s where a data removal service comes in. While no service can guarantee the complete removal of your data from the internet, a data removal service is still a smart choice. They aren’t cheap, and neither is your privacy.

These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

Practical steps to protect yourself this February

Here’s what you can do right now:

  • Never send money to someone you haven’t met in person
  • Be skeptical of fast emotional bonding
  • Verify profiles with reverse image searches
  • Don’t share personal details early
  • Remove your data from broker sites.
  • Use strong antivirus software to block malicious links and fake login pages. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

When you combine these steps, you remove the access, urgency and leverage scammers rely on.

Cybercriminals now deploy AI-generated faces, voices and scripted conversations to impersonate real people and build trust at scale in modern romance scams. (Martin Bertrand/Hans Lucas/AFP via Getty Images)

Kurt’s key takeaways

Romance scams are no longer random. They are targeted, data-driven and emotionally engineered. This Valentine’s Day, the best gift you can give yourself is privacy. By removing your personal data from broker databases, you make it harder for scammers to find you, profile you and exploit your trust. And that’s how you protect not just your heart, but your identity, your savings and your peace of mind.

Have you or someone you love been contacted by a Valentine’s Day romance scam that felt real or unsettling?  Let us know your thoughts by writing to us at Cyberguy.com.

Copyright 2026 CyberGuy.com. All rights reserved.

Uber Eats adds AI assistant to help with grocery shopping

Uber announced a new AI feature called “Cart Assistant” for grocery shopping in its Uber Eats app.

The new feature works a couple of different ways. You can use text prompts, as you would with any other AI chatbot, to ask it to build a grocery list for you. Or you can upload a picture of your shopping list and ask it to populate your cart with all your favorite items, based on your order history. You can be as generic as you like (“milk, eggs, cereal”) and the bot will make a list with all your preferred brands.

And that’s just to start out. Uber says in the coming months, Cart Assistant will add more features, including “full recipe inspiration, meal plans, and the ability to ask follow up questions, and expand to retail partners.”

But Uber acknowledges that, like all chatbots, Cart Assistant may make mistakes, and it urges users to double-check and confirm the results before placing any orders.

It will also only work at certain grocery stores; at launch, Uber announced interoperability with Albertsons, Aldi, CVS, Kroger, Safeway, Sprouts, Walgreens and Wegmans. More stores will be added in the future, the company says.

Uber has a partnership with OpenAI to integrate Uber Eats into its own suite of apps. But Uber spokesperson Richard Foord declined to say whether the AI company’s technology was powering the new chatbot in Uber Eats. “Cart Assistant draws on publicly available LLM models as well as Uber’s own AI stack,” Foord said in an email.

Uber has been racing to add more AI-driven features to its apps, including robotaxis with Waymo and sidewalk delivery robots in several cities. The company also recently revived its AI Labs to collaborate with its partners on building better products using delivery and customer data.
