New email scam uses hidden characters to slip past filters
Cybercriminals keep finding new angles to get your attention, and email remains one of their favorite tools. Over the years, you have probably seen everything from fake courier notices to AI-generated scams that feel surprisingly polished. Filters have improved, but attackers have learned to adapt. The latest technique takes aim at something you rarely think about: the subject line itself. Researchers have found a method that hides tiny, invisible characters inside the subject so automated systems fail to flag the message. It sounds subtle, but it is quickly becoming a serious problem.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Cybercriminals are using invisible Unicode characters to disguise phishing email subject lines, allowing dangerous scams to slip past filters. (Photo by Donato Fasano/Getty Images)
How the new trick works
Researchers recently uncovered phishing campaigns that embed soft hyphens between every letter of an email subject. These invisible Unicode characters normally help with text formatting, marking where a word may break across lines. They do not show up in your inbox, but they completely throw off keyword-based filters. Attackers use MIME encoded-word formatting to slip the characters into the subject: by encoding the text as UTF-8 and then Base64, they can weave the hidden characters through the entire phrase.
One analyzed email decoded to “Your Password is About to Expire” with a soft hyphen tucked between every character. To you, it looks normal. To a security filter, it looks scrambled, with no clear keyword to match. The attackers then use the same trick in the body of the email, so both layers slide through detection. The link leads to a fake login page sitting on a compromised domain, designed to harvest your credentials.
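The researchers did not publish the attackers' tooling, but the encoding trick itself is simple to sketch. The Python below is a hypothetical reconstruction of the technique described above, not the actual campaign code: it interleaves soft hyphens (U+00AD) through the subject, then wraps the result in a MIME encoded-word (RFC 2047) using UTF-8 and Base64.

```python
import base64

SOFT_HYPHEN = "\u00ad"  # invisible in most email clients

subject = "Your Password is About to Expire"

# Interleave a soft hyphen between every character of the subject
obfuscated = SOFT_HYPHEN.join(subject)

# Wrap it in an RFC 2047 encoded-word: =?charset?encoding?data?=
encoded_word = (
    "=?UTF-8?B?"
    + base64.b64encode(obfuscated.encode("utf-8")).decode("ascii")
    + "?="
)

# A naive keyword filter scanning the decoded string finds no match...
assert "Password" not in obfuscated
# ...yet once the soft hyphens render as nothing, the reader sees the original text
assert obfuscated.replace(SOFT_HYPHEN, "") == subject
```

The point of the sketch is the mismatch it demonstrates: the string a filter inspects and the string a human reads are no longer the same.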
If you have ever tried spotting a phishing email, this one still follows the usual script. It builds urgency, claims something is about to expire and points you to a login page. The difference is in how neatly it dodges the filters you trust.
Why this phishing technique is so dangerous
Most phishing filters rely on pattern recognition. They look for suspicious words, common phrases and structure. They also scan for known malicious domains. By splitting every character with invisible symbols, attackers break up these patterns. The text becomes readable for you but unreadable for automated systems. This creates a quiet loophole where old phishing templates suddenly become effective again.
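The loophole can be closed by normalizing text before matching. Here is a minimal defensive sketch, an illustration rather than what any particular filter actually does: it strips Unicode "format" characters (category Cf, which includes the soft hyphen, zero-width space and zero-width joiner) so keyword scanning sees the same text the reader does.

```python
import unicodedata

def strip_invisible(text: str) -> str:
    # Drop Unicode "format" characters (category Cf): soft hyphen (U+00AD),
    # zero-width space (U+200B), zero-width joiner (U+200D), and similar.
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

# The decoded subject from the analyzed campaign, soft hyphens and all
decoded = "\u00ad".join("Your Password is About to Expire")

assert "Password" not in decoded                # naive matching misses it
assert "Password" in strip_invisible(decoded)   # normalized matching catches it
```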
The worrying part is how easy this method is to copy. The tools needed to encode these messages are widely available. Attackers can automate the process and churn out bulk campaigns with little extra effort. Since the characters are invisible in most email clients, even tech-savvy users do not notice anything odd at first glance.
Security researchers point out that this method has appeared in email bodies for years, but using it in the subject line is less common. That makes it harder for existing filters to catch. Subject lines also play a key role in shaping your first impression. If the subject looks familiar and urgent, you are more likely to open the email, which gives the attacker a head start.
How to spot a phishing email before you click
Phishing emails often look legitimate, but the links inside them tell a different story. Scammers hide dangerous URLs behind familiar-looking text, hoping you will click without checking. One safe way to preview a link is by using a private email service that shows the real destination before your browser loads it.
Our top-rated private email provider pick includes malicious link protection that reveals full URLs before opening them. This gives you a clear view of where a link leads before anything can harm your device. It also offers strong privacy features like no ads, no tracking, encrypted messages and unlimited disposable aliases.
For recommendations on private and secure email providers, visit Cyberguy.com
A new phishing method hides soft hyphens inside subject lines, scrambling keyword detection while appearing normal to users. (Photo by Silas Stein/picture alliance via Getty Images)
9 steps you can take to protect yourself from this phishing scam
You do not need to become a security expert to stay safe. A few habits, paired with the right tools, can shut down most phishing attempts before they have a chance to work.
1) Use a password manager
A password manager helps you create strong, unique passwords for every account. Even if a phishing email fools you, the attacker cannot use your password elsewhere because each one is different. Most password managers also warn you when a site looks suspicious.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
2) Enable two-factor authentication
Turning on 2FA adds a second step to your login process. Even if someone steals your password, they still need the verification code on your phone. This stops most phishing attempts from going any further.
3) Install reliable antivirus software
Strong antivirus software does more than scan for malware. Many can flag unsafe pages, block suspicious redirects and warn you before you enter your details on a fake login page. It is a simple layer of protection that helps a lot when an email slips past filters.
The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
4) Limit your personal data online
Attackers often tailor phishing messages using information they find about you. Reducing your digital footprint makes it harder for them to craft emails that feel convincing. You can use personal data removal services to clean up exposed details and old database leaks.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with information found on the dark web, making it harder for them to target you.
Researchers warn that attackers are bypassing email defenses by manipulating encoded subject lines with unseen characters. (Photo by Lisa Forster/picture alliance via Getty Images)
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
5) Check sender details carefully
Do not rely on the display name. Always check the full email address. Attackers often tweak domain names by a single letter or symbol. If something feels off, open the site manually instead of clicking any link inside the email.
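That advice can be automated. The snippet below is a hypothetical helper, not a feature of any mail client: the `TRUSTED_DOMAINS` list and the 0.8 similarity cutoff are illustrative assumptions. It ignores the display name, pulls out the sender's domain, and flags near-miss lookalikes of trusted domains.

```python
import difflib
from email.utils import parseaddr

# Illustrative allowlist; in practice this would be your own trusted senders
TRUSTED_DOMAINS = ["microsoft.com", "paypal.com", "google.com"]

def check_sender(from_header: str) -> str:
    # Ignore the display name entirely; only the address matters
    _, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # A close-but-not-exact match (e.g. "rnicrosoft.com") is a classic spoof sign
    if difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.8):
        return "suspicious lookalike"
    return "unknown"
```

For example, `check_sender('"Microsoft" <alerts@rnicrosoft.com>')` comes back as a suspicious lookalike even though the display name says Microsoft, which is exactly the single-letter tweak described above.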
6) Never reset passwords through email links
If you get an email claiming your password will expire, do not click the link. Go to the website directly and check your account settings. Phishing emails rely on urgency. Slowing down and confirming the issue yourself removes that pressure.
7) Keep your software and browser updated
Updates often include security fixes that help block malicious scripts and unsafe redirects. Attackers take advantage of older systems because they are easier to trick. Staying updated keeps you ahead of known weaknesses.
8) Turn on advanced spam filtering or “strict” filtering
Many email providers (Gmail, Outlook, Yahoo) allow you to tighten spam filtering settings. This won’t catch every soft-hyphen scam, but it improves your odds and reduces risky emails overall.
9) Use a browser with anti-phishing protection
Chrome, Safari, Firefox, Brave, and Edge all include anti-phishing checks. This adds another safety net if you accidentally click a bad link.
Kurt’s key takeaway
Phishing attacks are changing fast, and tricks like invisible characters show how creative attackers are getting. It’s safe to say filters and scanners are also improving, but they cannot catch everything, especially when the text they see is not the same as what you see. Staying safe comes down to a mix of good habits, the right tools, and a little skepticism whenever an email pushes you to act quickly. If you slow down, double-check the details, and follow the steps that strengthen your accounts, you make it much harder for anyone to fool you.
Do you trust your email filters, or do you double-check suspicious messages yourself? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
LG’s CLOiD robot can load the washer for you, slowly
LG’s CLOiD robot took the stage at CES 2026 on Monday, offering our first look at the bot in action. During LG’s keynote, the company showed how CLOiD can load your washer or dryer (albeit slowly) as part of its goal of creating a “zero labor home.”
CLOiD waved both of its five-finger hands as it rolled out on stage. Brandt Varner, LG’s vice president of sales in its home appliances division, followed behind and asked the bot to take care of the wet towel he was holding. “Sure, I’ll get the laundry started,” CLOiD said in a masculine-sounding voice. “Let me show everyone what I can do.”
The bot’s animated eyes “blinked” as it rolled closer to a washer that opened automatically (I hope CLOiD can open that door itself!), extending its left arm into the washer and dropping the towel into the drum. The whole process — from getting the towel to putting it in the machine — took nearly 30 seconds, which makes me wonder how long it would take to load a week’s worth of laundry.
The bot returned later in the keynote to bring a bottle of water to another presenter, Steve Scarbrough, the senior vice president of LG’s HVAC division. “I noticed by your voice and tone that you might want some water,” it said before handing over the bottle and giving Scarbrough a fist bump.
There’s still no word on when, or if, LG CLOiD will ever be available for purchase, but at least we’ll have WALL-E’s weird cousin to help out with some tasks around the home.
Can AI chatbots trigger psychosis in vulnerable people?
Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.
Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
What psychiatrists are seeing in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.
Why AI chatbot conversations feel different from past technology
Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating.
For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.
Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.
A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.
What AI companies say about mental health risks
OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.
Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.
Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
- Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
- Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.
Kurt’s key takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.
Birdbuddy’s new smart feeders aim to make spotting birds easier, even for beginners
Birdbuddy is introducing two new smart bird feeders: the flagship Birdbuddy 2 and the more compact, cheaper Birdbuddy 2 Mini aimed at first-time users and smaller outdoor spaces. Both models are designed to be faster and easier to use than previous generations, with upgraded cameras that can shoot in portrait or landscape and wake instantly when a bird lands so you’re less likely to miss the good stuff.
The Birdbuddy 2 costs $199 and features a redesigned circular camera housing that delivers 2K HDR video, slow-motion recording, and a wider 135-degree field of view. The upgraded built-in mic should also better pick up birdsong, which could make identifying species easier using both sound and sight.
The feeder itself offers a larger seed capacity and an integrated perch extender, along with support for both 2.4GHz and 5GHz Wi-Fi for more stable connectivity. The new model also adds dual integrated solar panels to help keep it powered throughout the day, while adding a night sleep mode to conserve power.
The Birdbuddy 2 Mini is designed to deliver the same core AI bird identification and camera experience, but in a smaller, more accessible package. At 6.95 inches tall with a smaller seed capacity, it’s geared toward first-time smart birders and smaller outdoor spaces like balconies, and it supports an optional solar panel.
Birdbuddy 2’s first batch of preorders has already sold out, with shipments expected in February 2026 and wider availability set for mid-2026. Meanwhile, the Birdbuddy 2 Mini will be available to preorder for $129 in mid-2026, with the company planning on shipping the smart bird feeder in late 2026.