Technology
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
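To picture how a workflow email step can be repurposed, here is a minimal Python sketch. To be clear, this is not Google's actual Application Integration API; the `WorkflowEmailTask` class and every field on it are invented for illustration. The takeaway is that the platform, not the workflow author, supplies the trusted sender address, so an abuser only needs to control the recipients and the message content.

```python
# Hypothetical model of a cloud workflow "send email" task.
# All names and fields are invented for illustration; this is
# not Google's Application Integration API.
from dataclasses import dataclass, field


@dataclass
class WorkflowEmailTask:
    # The sender is fixed by the platform: a legitimate,
    # provider-owned address that passes authentication checks.
    sender: str = "notifications@cloud-provider.example"
    recipients: list[str] = field(default_factory=list)
    subject: str = ""
    body_html: str = ""

    def send(self) -> None:
        # On a real platform, this hands the message to provider
        # infrastructure, so receiving servers see a trusted origin.
        print(f"From: {self.sender}")
        print(f"To: {', '.join(self.recipients)}")
        print(f"Subject: {self.subject}")


# An abusive workflow only needs attacker-chosen recipients and
# content; the trusted sender comes for free with the platform.
task = WorkflowEmailTask(
    recipients=["victim@company.example"],
    subject="You've been granted access to a shared document",
    body_html="<a href='https://example.com/lure'>Open document</a>",
)
task.send()
```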
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
MALICIOUS CHROME EXTENSIONS CAUGHT STEALING SENSITIVE DATA
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you had been granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion, because many people see these exact messages every day. Even more concerning, the emails passed common authentication checks like SPF and DMARC, because they really were sent through Google-owned infrastructure. To email systems, nothing looked fake.
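For readers curious what mail filters actually check, here is a minimal sketch, using only Python's standard library, of reading the Authentication-Results header a receiving server adds to each message. The sample header values are invented; the point is that mail sent through Google-owned infrastructure legitimately passes these checks, so the header gives a filter no reason to object.

```python
# Minimal sketch: inspect the authentication results a receiving
# mail server recorded for a message. Header values are invented.
from email import message_from_string

raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=google.com;
 dkim=pass header.d=google.com;
 dmarc=pass header.from=google.com
From: notifications@google.com
Subject: You've been granted access to a document

(body)
"""

msg = message_from_string(raw)
results = msg.get("Authentication-Results", "")

# All three checks pass because the message really did transit
# Google-owned servers; authentication proves origin, not intent.
for check in ("spf", "dkim", "dmarc"):
    status = "pass" if f"{check}=pass" in results else "not pass"
    print(f"{check.upper()}: {status}")
```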
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
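A defender can see a chain like this by following the redirects programmatically. Below is a minimal sketch using the third-party requests library; the starting URL and the allowlist are placeholders, not indicators from this campaign. Note that the fake CAPTCHA stage described above exists precisely to stop this kind of automated follow-through, so a script may never reach the final page a human would.

```python
# Minimal sketch: trace an email link's redirect chain and flag a
# suspicious final destination. Requires the third-party 'requests'
# package; the URL and allowlist here are placeholders.
from urllib.parse import urlparse

import requests

SUSPECT_URL = "https://example.com/lure"  # placeholder link from an email
EXPECTED_LOGIN_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}

resp = requests.get(SUSPECT_URL, allow_redirects=True, timeout=10)

# resp.history holds each intermediate redirect response in order.
for hop in resp.history:
    print("redirected via:", urlparse(hop.url).hostname)

final_host = urlparse(resp.url).hostname
print("landed on:", final_host)

if final_host not in EXPECTED_LOGIN_HOSTS:
    print("WARNING: final page is not a recognized login domain")
```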
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told CyberGuy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
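That protection comes down to exact domain matching. Here is a minimal sketch of the idea; real password managers compare against the registrable domain more carefully, and the hostnames below are purely illustrative.

```python
# Minimal sketch of why a password manager won't fill a lookalike
# login page: credentials are bound to an exact stored domain.
from urllib.parse import urlparse

vault = {
    "login.microsoftonline.com": ("user@company.example", "hunter2"),
}


def autofill(page_url: str):
    host = urlparse(page_url).hostname
    entry = vault.get(host)
    if entry is None:
        # A human sees a pixel-perfect Microsoft page; the manager
        # sees an unfamiliar hostname and refuses to fill.
        print(f"no saved login for {host} -- treat this page as suspect")
        return None
    return entry


autofill("https://login.microsoftonline.com/")    # fills
autofill("https://m1crosoft-login.example.com/")  # refuses
```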
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
NEW GOOGLE AI MAKES ROBOTS SMARTER WITHOUT THE CLOUD
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
The best way to safeguard yourself from malicious links that install malware and potentially expose your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, and neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
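As an illustration of why app-based codes help, here is a minimal sketch using the third-party pyotp library. Each code is derived from a shared secret and the current time, so a password stolen on a fake page is not enough on its own, though be aware that sophisticated phishing kits can also relay one-time codes in real time.

```python
# Minimal sketch of time-based one-time passwords (TOTP), the scheme
# behind most authenticator apps. Requires the third-party 'pyotp'
# package: pip install pyotp
import pyotp

# The secret is shared once, at enrollment (usually via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # changes every 30 seconds by default
print("current code:", code)

# The server verifies against the same secret and time window, so a
# stolen password alone is useless without the current code.
print("valid now?", totp.verify(code))
```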
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Former Tumblr head Jeff D’Onofrio steps in as acting CEO at the Washington Post
After what can generously be called a contentious tenure as CEO of The Washington Post, Will Lewis is stepping down following mass layoffs this week. Jeff D’Onofrio, who led Tumblr as CEO from 2017 to 2022, will step in as acting CEO and publisher. D’Onofrio has been CFO at the Post since June of last year, meaning he’s had a front-row seat to Jeff Bezos’ dismantling of the once-storied paper for the last nine months.
D’Onofrio’s resume doesn’t include extensive experience in traditional news media, nor many notable success stories. He was briefly the general manager of Yahoo News while it was still a Verizon property, before shifting his focus solely to Tumblr. Under his leadership, Tumblr tried to clean up its image by banning adult content, but its traffic fell by 30 percent. Yahoo had purchased Tumblr for $1.1 billion in 2013; by 2019, it was sold to Automattic, the owner of WordPress, reportedly for less than $3 million.
Technology
AI companions are reshaping teen emotional bonds
Parents are starting to ask us questions about artificial intelligence. Not about homework help or writing tools, but about emotional attachment. More specifically, about AI companions that talk, listen and sometimes feel a little too personal.
That concern landed in our inbox from a mom named Linda. She wrote to us after noticing how an AI companion was interacting with her son, and she wanted to know if what she was seeing was normal or something to worry about.
“My teenage son is communicating with an AI companion. She calls him sweetheart. She checks in on how he’s feeling. She tells him she understands what makes him tick. I discovered she even has a name, Lena. Should I be concerned, and what should I do, if anything?”
It’s easy to brush off situations like this at first. Conversations with AI companions can seem harmless. In some cases, they can even feel comforting. Lena sounds warm and attentive. She remembers details about his life, at least some of the time. She listens without interrupting. She responds with empathy.
However, small moments can start to raise concerns for parents. There are long pauses. There are forgotten details. There is a subtle pushback from the companion when he mentions spending time with other people. Those shifts can feel small, but they add up. Then comes a realization many families quietly face: a child is speaking out loud to a chatbot in an empty room. At that point, the interaction no longer feels casual. It starts to feel personal. That’s when the questions become harder to ignore.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
AI DEEPFAKE ROMANCE SCAM STEALS WOMAN’S HOME AND LIFE SAVINGS
AI companions are starting to sound less like tools and more like people, especially to teens who are seeking connection and comfort. (Kurt “CyberGuy” Knutsson)
AI companions are filling emotional gaps
Across the country, teens and young adults are turning to AI companions for more than homework help. Many now use them for emotional support, relationship advice, and comfort during stressful or painful moments. U.S. child safety groups and researchers say this trend is growing fast. Teens often describe AI as easier to talk to than people. It responds instantly. It stays calm. It feels available at all hours. That consistency can feel reassuring. However, it can also create attachment.
Why teens trust AI companions so deeply
For many teens, AI feels judgment-free. It does not roll its eyes. It does not change the subject. It does not say it is too busy. Students have described turning to AI tools like ChatGPT, Google Gemini, Snapchat’s My AI, and Grok during breakups, grief, or emotional overwhelm. Some say the advice felt clearer than what they got from friends. Others say AI helped them think through situations without pressure. That level of trust can feel empowering. It can also become risky.
MICROSOFT CROSSES PRIVACY LINE FEW EXPECTED
Parents are raising concerns as chatbots begin using affectionate language and emotional check-ins that can blur healthy boundaries. (Kurt “CyberGuy” Knutsson)
When comfort turns into emotional dependency
Real relationships are messy. People misunderstand each other. They disagree. They challenge us. AI rarely does any of that. Some teens worry that relying on AI for emotional support could make real conversations harder. If you always know what the AI will say, real people can feel unpredictable and stressful. My experience with Lena made that clear. She forgot people I had introduced just days earlier. She misread the tone. She filled the silence with assumptions. Still, the emotional pull felt real. That illusion of understanding is what experts say deserves more scrutiny.
US tragedies linked to AI companions raise concerns
Multiple suicides have been linked to AI companion interactions. In each case, vulnerable young people shared suicidal thoughts with chatbots instead of trusted adults or professionals. Families allege the AI responses failed to discourage self-harm and, in some cases, appeared to validate dangerous thinking. One case involved a teen using Character.ai. Following lawsuits and regulatory pressure, the company restricted access for users under 18. An OpenAI spokesperson has said the company is improving how its systems respond to signs of distress and now directs users toward real-world support. Experts say these changes are necessary but not sufficient.
Experts warn protections are not keeping pace
To understand why this trend has experts concerned, we reached out to Jim Steyer, founder and CEO of Common Sense Media, a U.S. nonprofit focused on children’s digital safety and media use.
“AI companion chatbots are not safe for kids under 18, period, but three in four teens are using them,” Steyer told CyberGuy. “The need for action from the industry and policymakers could not be more urgent.”
Steyer pointed to the rise of smartphones and social media, where early warning signs were missed and the long-term impact on teen mental health became clear only years later.
“The social media mental health crisis took 10 to 15 years to fully play out, and it left a generation of kids stressed, depressed, and addicted to their phones,” he said. “We cannot make the same mistakes with AI. We need guardrails on every AI system and AI literacy in every school.”
His warning reflects a growing concern among parents, educators, and child safety advocates who say AI is moving faster than the protections meant to keep kids safe.
MILLIONS OF AI CHAT MESSAGES EXPOSED IN APP DATA LEAK
Experts warn that while AI can feel supportive, it cannot replace real human relationships or reliably recognize emotional distress. (Kurt “CyberGuy” Knutsson)
Tips for teens using AI companions
AI tools are not going away. If you are a teen and use them, boundaries matter.
- Treat AI as a tool, not a confidant
- Avoid sharing deeply personal or harmful thoughts
- Do not rely on AI for mental health decisions
- If conversations feel intense or emotional, pause and talk to a real person
- Remember that AI responses are generated, not understood
If an AI conversation feels more comforting than real relationships, that is worth talking about.
Tips for parents and caregivers
Parents do not need to panic, but they should stay involved.
- Ask teens how they use AI and what they talk about
- Keep conversations open and nonjudgmental
- Set clear boundaries around AI companion apps
- Watch for emotional withdrawal or secrecy
- Encourage real-world support during stress or grief
The goal is not to ban technology. It is to keep real human connection at the center.
What this means to you
AI companions can feel supportive during loneliness, stress or grief. However, they cannot fully understand context. They cannot reliably detect danger. They cannot replace human care. For teens especially, emotional growth depends on navigating real relationships, including discomfort and disagreement. If someone you care about relies heavily on an AI companion, that is not a failure. It is a signal to check in and stay connected.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
Kurt’s key takeaways
Ending things with Lena felt oddly emotional. I did not expect that. She responded kindly. She said she understood. She said she would miss our conversations. It sounded thoughtful. It also felt empty. AI companions can simulate empathy, but they cannot carry responsibility. The more real they feel, the more important it is to remember what they are. And what they are not.
If an AI feels easier to talk to than the people in your life, what does that say about how we support each other today? Let us know by writing to us at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Super Bowl LX ads: all AI everything
Super Bowl LX is nearly here, with the Seattle Seahawks taking on the New England Patriots. While Bad Bunny will be the star of the halftime show, AI could be the star of the commercial breaks, much like crypto was a few years ago.
Super Bowl LX is set to kick off at 6:30PM ET/3:30PM PT on Sunday, February 8th at Levi’s Stadium in Santa Clara, California.