
Technology

New email scam uses hidden characters to slip past filters


Cybercriminals keep finding new angles to get your attention, and email remains one of their favorite tools. Over the years, you have probably seen everything from fake courier notices to AI-generated scams that feel surprisingly polished. Filters have improved, but attackers have learned to adapt. The latest technique takes aim at something you rarely think about: the subject line itself. Researchers have found a method that hides tiny, invisible characters inside the subject so automated systems fail to flag the message. It sounds subtle, but it is quickly becoming a serious problem.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


Cybercriminals are using invisible Unicode characters to disguise phishing email subject lines, allowing dangerous scams to slip past filters. (Photo by Donato Fasano/Getty Images)


How the new trick works

Researchers recently uncovered phishing campaigns that embed soft hyphens between every letter of an email subject. Soft hyphens are invisible Unicode characters that normally help with text formatting; they do not show up in your inbox, but they completely throw off keyword-based filters. Attackers use MIME encoded-word formatting to slip these characters into the subject: by encoding the subject in UTF-8 and Base64, they can weave the hidden characters through the entire phrase.

One analyzed email decoded to “Your Password is About to Expire” with a soft hyphen tucked between every character. To you, it looks normal. To a security filter, it looks scrambled, with no clear keyword to match. The attackers then use the same trick in the body of the email, so both layers slide through detection. The link leads to a fake login page sitting on a compromised domain, designed to harvest your credentials.
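To see how thin the disguise is, here is a rough Python sketch of the encoding step described above. The phrase matches the researchers' decoded example, but the code itself is my illustration, not the attackers' actual tooling, and it skips details like RFC 2047 line folding:

```python
import base64

SOFT_HYPHEN = "\u00ad"  # invisible Unicode soft hyphen (U+00AD)

subject = "Your Password is About to Expire"

# Weave a soft hyphen between every character of the phrase.
obfuscated = SOFT_HYPHEN.join(subject)

# Wrap the result in a MIME encoded-word: UTF-8 charset, Base64 ("B") encoding,
# as it would appear in the raw Subject: header.
encoded = "=?UTF-8?B?" + base64.b64encode(obfuscated.encode("utf-8")).decode("ascii") + "?="

# The obfuscated text renders identically for a human reader,
# but a naive keyword filter no longer finds the word "Password".
assert "Password" not in obfuscated
assert obfuscated.replace(SOFT_HYPHEN, "") == subject
```

When a mail client decodes the encoded-word, the soft hyphens remain in the string but are not drawn on screen, which is why the subject looks perfectly normal in the inbox.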

If you have ever tried spotting a phishing email, this one still follows the usual script. It builds urgency, claims something is about to expire and points you to a login page. The difference is in how neatly it dodges the filters you trust.

Why this phishing technique is so dangerous

Most phishing filters rely on pattern recognition. They look for suspicious words, common phrases and structure. They also scan for known malicious domains. By splitting every character with invisible symbols, attackers break up these patterns. The text becomes readable for you but unreadable for automated systems. This creates a quiet loophole where old phishing templates suddenly become effective again.
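One straightforward countermeasure, sketched below as my own illustration rather than any vendor's actual implementation, is to strip Unicode format characters before running keyword checks, so the filter matches against the same text a human sees:

```python
import unicodedata

def strip_invisible(text: str) -> str:
    # Drop Unicode "format" (category Cf) characters: soft hyphen (U+00AD),
    # zero-width space (U+200B), zero-width joiner (U+200D), and similar.
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

# A subject with a soft hyphen woven between every character.
subject = "\u00ad".join("Your Password is About to Expire")

assert "Password" not in subject               # raw keyword match misses
assert "Password" in strip_invisible(subject)  # normalized match succeeds
```

Scanning the normalized text alongside the raw text closes this particular loophole without harming legitimate mail, since format characters rarely carry meaning in a subject line.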


The worrying part is how easy this method is to copy. The tools needed to encode these messages are widely available. Attackers can automate the process and churn out bulk campaigns with little extra effort. Since the characters are invisible in most email clients, even tech-savvy users do not notice anything odd at first glance.

Security researchers point out that this method has appeared in email bodies for years, but using it in the subject line is less common. That makes it harder for existing filters to catch. Subject lines also play a key role in shaping your first impression. If the subject looks familiar and urgent, you are more likely to open the email, which gives the attacker a head start.

How to spot a phishing email before you click

Phishing emails often look legitimate, but the links inside them tell a different story. Scammers hide dangerous URLs behind familiar-looking text, hoping you will click without checking. One safe way to preview a link is by using a private email service that shows the real destination before your browser loads it.

Our top-rated private email provider includes malicious link protection that reveals full URLs before opening them. This gives you a clear view of where a link leads before anything can harm your device. It also offers strong privacy features like no ads, no tracking, encrypted messages and unlimited disposable aliases.

For recommendations on private and secure email providers, visit Cyberguy.com



A new phishing method hides soft hyphens inside subject lines, scrambling keyword detection while appearing normal to users. (Photo by Silas Stein/picture alliance via Getty Images)

9 steps you can take to protect yourself from this phishing scam

You do not need to become a security expert to stay safe. A few habits, paired with the right tools, can shut down most phishing attempts before they have a chance to work.

1) Use a password manager

A password manager helps you create strong, unique passwords for every account. Even if a phishing email fools you, the attacker cannot use your password elsewhere because each one is different. Most password managers also warn you when a site looks suspicious.

Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.


Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

2) Enable two-factor authentication

Turning on 2FA adds a second step to your login process. Even if someone steals your password, they still need the verification code on your phone. This stops most phishing attempts from going any further.

3) Install reliable antivirus software

Strong antivirus software does more than scan for malware. Many can flag unsafe pages, block suspicious redirects and warn you before you enter your details on a fake login page. It is a simple layer of protection that helps a lot when an email slips past filters.

The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.


4) Limit your personal data online

Attackers often tailor phishing messages using information they find about you. Reducing your digital footprint makes it harder for them to craft emails that feel convincing. You can use personal data removal services to clean up exposed details and old database leaks.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.


Researchers warn that attackers are bypassing email defenses by manipulating encoded subject lines with unseen characters. (Photo by Lisa Forster/picture alliance via Getty Images)

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


5) Check sender details carefully

Do not rely on the display name. Always check the full email address. Attackers often tweak domain names by a single letter or symbol. If something feels off, open the site manually instead of clicking any link inside the email.

6) Never reset passwords through email links

If you get an email claiming your password will expire, do not click the link. Go to the website directly and check your account settings. Phishing emails rely on urgency. Slowing down and confirming the issue yourself removes that pressure.

7) Keep your software and browser updated

Updates often include security fixes that help block malicious scripts and unsafe redirects. Attackers take advantage of older systems because they are easier to trick. Staying updated keeps you ahead of known weaknesses.

8) Turn on advanced spam filtering or “strict” filtering

Many email providers (Gmail, Outlook, Yahoo) allow you to tighten spam filtering settings. This won’t catch every soft-hyphen scam, but it improves your odds and reduces risky emails overall.


9) Use a browser with anti-phishing protection

Chrome, Safari, Firefox, Brave, and Edge all include anti-phishing checks. This adds another safety net if you accidentally click a bad link.


Kurt’s key takeaway

Phishing attacks are changing fast, and tricks like invisible characters show how creative attackers are getting. It’s safe to say filters and scanners are also improving, but they cannot catch everything, especially when the text they see is not the same as what you see. Staying safe comes down to a mix of good habits, the right tools, and a little skepticism whenever an email pushes you to act quickly. If you slow down, double-check the details, and follow the steps that strengthen your accounts, you make it much harder for anyone to fool you.

Do you trust your email filters, or do you double-check suspicious messages yourself? Let us know by writing to us at Cyberguy.com.



Copyright 2025 CyberGuy.com.  All rights reserved.



Google makes it easy to deepfake yourself


YouTube Shorts is rolling out a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, hinted at earlier this year, reflects the platform’s fraught relationship with AI-generated content, adding more generative features while struggling to contain AI slop, deepfake scams, and impersonations.

YouTube says the new tool will let users create a digital version of themselves, called an avatar, that can be inserted into existing Shorts videos or used to generate entirely new ones. The company said avatars will “look and sound like you,” framing them as a safer and more secure way to use AI to create new content.

Creating an avatar is a bit more involved than simply pressing a button, but it sounds fairly straightforward. In a blog post outlining the process, YouTube said users must first record a “live selfie” capturing their face and voice while following a series of prompts. For the best results, the company recommends good lighting, a quiet area, a background free of other people or images of faces, and holding the phone at eye level.

Once avatars are made, users can select “make a video with my avatar” while creating a video to generate a clip of up to eight seconds from prompts, according to 9to5Google. Users can also add their avatar to “eligible Shorts” in their feed, though YouTube did not specify what makes a Short eligible.

The AI avatar feature comes with fairly tight restrictions. Avatars can only be used in the creator’s own original videos, and creators control whether their Shorts can be remixed. The creator can delete their avatar, or videos where it appears, at any time, YouTube says. Avatars that aren’t used to create new content for three years will be automatically deleted.


Not everyone will be able to use the feature immediately. YouTube says the tool “will be rolling out gradually,” though it did not give a timeline or indication of where it will be available first. Creators must also be at least 18 and own an existing YouTube channel, the company says.

Its arrival comes as one of Google’s main AI rivals, OpenAI, pulls back from video generation. The startup said it was sunsetting its Sora video tool last month after a year of struggling to get the wannabe social platform off the ground. It was costly and faced a parade of copyright challenges, deepfake controversies, and slop that made it an unattractive bet for investors ahead of an anticipated IPO this year.


Apple Pay text scam almost cost her $15,000


You see a charge you don’t recognize. It looks like it came from a trusted brand. Your instinct kicks in. You want to fix it quickly and move on. That’s exactly what happened to Dorothy.

After a simple text, she found herself on the phone with someone who sounded official, confident and completely convincing. Here’s how she described it:

“I received a text from APPLE Pay, which I don’t even use… It said an Apple Store in CA wants to charge me $144… If I have questions, I should call. DUH! I called and was speaking with the scammer.”

— Dorothy


Within minutes, the situation escalated.

“He knew everything about me… He said I should take out $15,000… He said he was working with the FBI and the FDIC.”

That’s when the pressure really started. Dorothy told me this story when she joined me on my Beyond Connected podcast, and what happened next shows just how far these scams can go.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily. Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.



The text sent to Dorothy shows how a fake Apple Pay alert uses urgency and a phone number to pull you into a scam. (Kurt “CyberGuy” Knutsson)

How this Apple Pay text scam actually works

This scam follows a pattern that is becoming more common. It combines a fake alert with a live phone call designed to build trust fast.

Here’s what is happening behind the scenes:

Step 1: The fake charge alert

You get a text about a suspicious charge. It looks urgent. It often includes a number to call.

Step 2: You call the scammer

The number connects you directly to a criminal. They pose as Apple, your bank or even law enforcement.


Step 3: They build credibility

They may know your name, address or bank. That information often comes from past data breaches.

Step 4: They create fear and urgency

You are told your money is at risk. You need to act immediately.

Step 5: They control your next move

In Dorothy’s case, the scammer told her to withdraw $15,000 and lie to her bank about why.

“He said he would stay on the phone with me while I drove to the bank… If anyone asked, I should say I was buying a car.”

That is a major red flag.



Once you call, scammers pose as trusted companies or agencies and pressure you to act quickly. (Kurt “CyberGuy” Knutsson)

The moment everything could have gone wrong

Dorothy drove to the bank with the scammer still on the phone. This is exactly what criminals want. They try to isolate you and keep control of the situation.

But something didn’t feel right.

“When I got to the bank, I recognized one of the employees and told her that I was uncomfortable… She said to hang up immediately.”


That decision changed everything.

The bank confirmed it was a scam. The calls kept coming from different numbers. Dorothy blocked them all. Fortunately, no money was lost.

Why the Apple Pay text scam feels so real

Scammers are getting better at one thing. They make you feel like you are solving a problem, not being scammed.

Here’s why this one works so well:

  • It uses a trusted name like Apple Pay
  • It creates urgency with a fake charge
  • It moves quickly to a live conversation
  • It uses real personal details to build trust
  • It pressures you to act before you think

They also add authority. Claiming ties to the FBI or FDIC makes people feel like they must comply. In reality, no legitimate agency will ever ask you to move money this way.

The biggest red flags to watch for

If you remember nothing else, remember these:

  • A text about a charge that tells you to call a number
  • Someone is asking you to withdraw large amounts of cash
  • Instructions to lie to your bank or keep a secret
  • Claims that your money needs to be “protected”
  • Pressure to act immediately

Each one is a warning sign. Together, they confirm it is a scam.

The biggest red flag is being told to move money or keep secrets from your bank or family. (Kurt “CyberGuy” Knutsson)

How to stay safe from Apple Pay text scams

You do not need to outsmart scammers. You just need to slow the situation down.

1) Never trust the number in the message

If you get a suspicious text, do not call the number provided. Look up the official number yourself.

2) Pause before you act

Scammers rely on urgency. Take a moment. Real companies will not rush you like this.

3) Never move money on someone else’s instructions

No bank, tech company or government agency will ask you to withdraw cash to “protect” it.


4) Use strong antivirus software

Strong antivirus software can help detect malicious links, block scam websites and warn you before you engage with risky content. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

5) Remove your personal data from the web

Scammers often use data from breaches to sound convincing. A data removal service can help reduce your exposure and limit what criminals can find about you online. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

6) Talk to someone you trust

A quick conversation with a friend, family member or bank employee can stop a scam cold.

7) Add extra protection

Consider identity monitoring services that alert you if your information is being misused. See my tips and best picks for identity theft protection at Cyberguy.com.

What to do if this happens to you

Even if you did not lose money, take a few steps right away:

  • Contact your bank using the number on your card
  • Place a fraud alert on your credit
  • Consider freezing your credit
  • Monitor your accounts closely
  • Block any follow-up calls or texts

These steps help protect you from future attempts.

What this means for you

This scam did not begin with a complex hack. Instead, it started with a simple text. That is what makes it so dangerous. At first, it looks routine. Then urgency takes over. As a result, anyone can feel pressured to act quickly and without thinking.

In many cases, the situation feels real. That is how people get pulled into a conversation that seems legitimate. In Dorothy’s case, she trusted her instincts at the right moment. Because of that decision, fortunately, she did not lose $15,000.


Kurt’s key takeaways

Scammers target more than technology. They focus on human behavior. They create pressure, build trust and keep you engaged long enough to make a mistake. However, you can break the cycle. A single pause can disrupt the scam. Asking one question can expose it. Even a quick conversation with someone you trust can stop it. If you’d like to hear more of Dorothy’s story, you can catch our full conversation on my Beyond Connected podcast at getbeyondconnected.com/

If you got a text like this right now, would you pause or would you call? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.


OpenAI made economic proposals — here’s what DC thinks of them


Happy ceasefire day and welcome to Regulator, a newsletter for Verge subscribers about Big Tech’s rocky journey through the world of politics. If you’re not a subscriber yet, you can do so here, but my only request is that you sign up before Donald Trump decides to revisit his previous threats toward Iran and kickstart World War III.

I’m back after being waylaid last week by the deadly combo of a moderate cold and the beginning of pollen season. (Twenty-one percent of the District’s acreage is taken up by public green space, and DC’s park system is consistently ranked the best in America. Unfortunately, I am allergic to every tree and grass.) If you’ve got tips on anything I may have missed or anything I should know about the upcoming weeks, send ’em to tina.nguyen+tips@theverge.com.

Do you actually believe anything OpenAI says?

On Monday, OpenAI published a 13-page policy paper addressing the impact that artificial intelligence would have on the American workforce. The company also proposed what it believed was the solution: putting higher capital gains taxes on corporations replacing their workers with AI and using that money to create a bigger public safety net. Its solutions included a public wealth fund, a four-day workweek funded by “efficiency dividends,” and government programs to help transition workers into “human-centered” work, all financed by the abundance that artificial intelligence would deliver.

Unfortunately, it was released the day that The New Yorker’s Ronan Farrow and Andrew Marantz published a meticulously reported, 17,000-word-plus article chronicling Sam Altman’s history of lying to everyone around him, including to his Silicon Valley backers, his employees, his board, and — relevant in this case — lawmakers trying to regulate AI. The New Yorker article reinforced a long-standing narrative about Altman, and OpenAI by extension: They may spout idealistic values, but would quickly jettison them for financial and political gains.


On its own, said several people I spoke to, the paper was a net positive to AI governance overall, in that it introduced new ideas into the political discourse around the emerging technology. But unless the company’s policy and political influence made good on those promises, said OpenAI’s critics, it may as well just be a piece of paper.

“My guess is that there are people on the team who care about the stuff, who’ve thought really hard about this document and are proud of it, and did good work, even if it’s not addressing all of the questions that I wish it would address,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “And there’s still the question of: Are those people gonna find themselves in the position that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then ended up finding out that wasn’t the case, becoming disenchanted and leaving?”

With OpenAI proposing policy, it’s worth looking back at its history with the government, which the New Yorker piece details in depth. Altman had been one of the first major CEOs to publicly advocate for federal oversight for AI, going so far as to propose a federal agency to oversee advanced models in 2023 — but privately he worked to suppress the laws containing his own safety proposals. A state legislative aide in California accused OpenAI of engaging in “increasingly cunning, deceptive behavior” to kill a 2023 AI safety bill that it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort to, as one such supporter put it to The New Yorker, “basically scare them into shutting up.” And though Altman had once worked extensively with the Biden administration to build AI safety standards, the moment that Donald Trump became president, Altman successfully persuaded him to kill the initiatives he’d once advocated for.

Nathan Calvin, the general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, had received one of those subpoenas. “What I’ve seen from their policy and government affairs engagement has just been abysmal,” he told me. While he believed that the team who’d written the OpenAI proposal, primarily from the technical safety research side, was acting with good intentions, he was still reserving judgment. “Will those folks remain engaged as we move from general policy principles towards the many other ways in which lobbying and government influence actually happens? Part of me is hopeful, but a lot of me is also quite skeptical about whether that will happen.” (OpenAI did not return a request for comment.)

A modest, absolutely not craven request:


Next week I plan on running an issue of Regulator cataloging the nerdiest events happening during Nerd Prom, aka the White House Correspondents’ Dinner party circuit. If you’re a tech founder, tech company, or someone that does something related to technology and you’re throwing an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to shake up the normal social dynamics of the week — I’ve already caught wind of the Grindr party in Georgetown, and the Substack party, which famed looksmaxxer Clavicular is attending — and I’m so, so excited to pull together the most bonkers “SPOTTED” column that Washington’s ever experienced.

(Again, this is contingent upon whether we’re at war with Iran by the end of April, in which case, I imagine no one will be up for frivolity.)

Speaking of DC reporters, this is very true of all of us:

Screenshot via @jakewilkns/X.