You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts now embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots.
Technology
A nightly Waymo robotaxi parking lot honkfest is waking San Francisco neighbors
If you’ve ever wondered what happens to all those self-driving taxis when the world is asleep, one YouTube channel has you covered. Since the beginning of the month, software engineer Sophia Tung has been livestreaming a San Francisco parking lot that Waymo is renting to give its robotaxis somewhere to go during their downtime.
Tung told The Verge via email that the company appeared to “partially” take over the lot on July 28th, then later took over the entire lot. Waymo recently opened up its robotaxi service to anyone in San Francisco.
Days later, she set up the livestream, complete with LoFi study beats. Tung told us she’s running it off of a mini PC she had lying around, with a webcam shielded by a cereal box to reduce glare. Now, any time of day, you can pop in to check out what the Waymo cars are up to. If there aren’t any Waymos in the lot, “the flock will start migrating back” between 7PM and 9PM PST Sunday through Thursday, or between 11PM and midnight on Friday and Saturday, says text overlaid on the video.
As I write this, the lot is calm, with just three cars parked in it. But when the lot starts to fill up (which “usually happens at 4AM or so,” according to Tung), what looks like a maddening ballet of autonomous parking — and honking — begins. The noise goes on for as much as an hour at a time before it settles down, she said.
Waymo is “aware that in some scenarios our vehicles may briefly honk while navigating our parking lots,” company representative Chris Bonelli told The Verge in an email, adding that Waymo has figured out what’s causing the behavior and is working to fix it.
Tung, who is a self-described micromobility advocate, told The Verge she thinks “generally people are bemused,” and that she likes having the cars there. “Honestly, it’s fun to watch the cars come and go,” she said, adding that “it’s really just the honking that needs to be resolved.”
Technology
Xbox’s Towerborne is switching from a free-to-play game to a paid one
Towerborne, a side-scrolling action RPG published by Xbox Game Studios that has been available in early access, will officially launch on February 26th. But instead of launching as a free-to-play, always-online game as originally planned, Towerborne will be a paid game that you can play offline.
“You will own the complete experience permanently, with offline play and online co-op,” Trisha Stouffer, CEO and president of Towerborne developer Stoic, says in an Xbox Wire blog post. “This change required deep structural rebuilding over the past year, transforming systems originally designed around constant connectivity. The result is a stronger, more accessible, and more player-friendly version of Towerborne — one we’re incredibly proud to bring to launch.”
“After listening to our community during Early Access and Game Preview, we learned players wanted a complete, polished experience without ongoing monetization mechanics,” according to an FAQ. “Moving to a premium model lets us deliver the full game upfront—no live-service grind, no pay-to-win systems—just the best version of Towerborne.”
With popular live-service games like Fortnite and Roblox getting harder to usurp, Towerborne’s switch to a premium, offline-playable experience could make it more enticing for players who don’t want to jump into another time-sucking forever game. It makes Towerborne more appealing to me, at least.
With the 1.0 release of the game, Towerborne will have a “complete” story, new bosses, and a “reworked” difficulty system. You’ll also be able to acquire all in-game cosmetics for free through gameplay, with “no more cosmetic purchasing.” Players who are already part of early access will still be able to play the game.
Towerborne will launch on February 26th on Xbox Series X / S, Xbox on PC, Game Pass, Steam, and PS5. The standard edition will cost $24.99, while the deluxe edition will cost $29.99.
Technology
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – when you join my CYBERGUY.COM newsletter.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you were granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion. Many people are used to seeing these exact messages every day. Even more concerning, the emails bypassed common protections like SPF and DMARC because they were sent through Google-owned infrastructure. To email systems, nothing looked fake.
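To see why the messages sailed past SPF and DMARC, it helps to look at what a mail filter actually checks. The sketch below parses a simplified Authentication-Results header; the header string is a hypothetical example, not one captured from the campaign. When mail is relayed through a provider’s own infrastructure, all three verdicts legitimately read “pass”:

```python
# Illustrative only: parse a simplified Authentication-Results header to
# see why mail sent through a provider's own infrastructure passes checks.
# The header below is a hypothetical example, not a real captured email.
def auth_results(header: str) -> dict:
    """Extract the spf/dkim/dmarc verdicts from the header."""
    results = {}
    for part in header.split(";"):
        part = part.strip()
        for method in ("spf", "dkim", "dmarc"):
            if part.startswith(method + "="):
                results[method] = part.split("=", 1)[1].split()[0]
    return results

# A message relayed by a legitimate Google-owned sender passes all three
# checks, so filters relying on SPF/DKIM/DMARC alone see nothing wrong.
header = ("spf=pass (google.com: domain of sender designates permitted ip); "
          "dkim=pass header.d=google.com; "
          "dmarc=pass (p=REJECT) header.from=google.com")
print(auth_results(header))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

These checks verify who sent the mail, not whether its contents are safe, which is exactly the gap this campaign exploited.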
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
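The redirect chain above can be reduced to a simple rule of thumb: whatever path the link takes, the page asking for your password must live on a domain that actually belongs to that service. A minimal sketch, with placeholder URLs standing in for the campaign’s real ones:

```python
from urllib.parse import urlparse

# Hypothetical reconstruction of the redirect chain described above;
# these URLs are placeholders, not the attackers' real addresses.
chain = [
    "https://storage.cloud.google.com/some-bucket/notice.html",
    "https://abc123.googleusercontent.com/redirect",
    "https://login-micros0ft-verify.example/signin",  # fake Microsoft page
]

def landing_domain(urls: list[str]) -> str:
    """Hostname of the final hop in a redirect chain."""
    return urlparse(urls[-1]).hostname

def is_suspicious(urls: list[str], expected_suffixes: tuple[str, ...]) -> bool:
    """Flag a chain whose final hop is not on a domain the login belongs to."""
    host = landing_domain(urls)
    return not any(host == s or host.endswith("." + s) for s in expected_suffixes)

# A real Microsoft sign-in should end on a Microsoft-owned domain.
print(is_suspicious(chain, ("microsoftonline.com", "live.com")))  # True
```

The trusted Google hops in the middle are irrelevant; only the final landing domain matters.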
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told CyberGuy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
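One way to automate the hover check is to compare a link’s visible text with its real destination. The snippet below flags anchors whose text mentions one domain while the href points somewhere else; the email body is a made-up example:

```python
import re
from urllib.parse import urlparse

# Minimal sketch: a classic phishing tell is a link whose visible text
# reads like one domain while the href points at another.
ANCHOR = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Return (shown_domain, real_host) pairs where the visible text
    mentions a domain that does not match where the link actually goes."""
    hits = []
    for href, text in ANCHOR.findall(html):
        real_host = urlparse(href).hostname or ""
        shown = re.search(r"[\w.-]+\.\w{2,}", text)  # domain-like visible text
        if shown and shown.group(0).lower() not in real_host.lower():
            hits.append((shown.group(0), real_host))
    return hits

email_body = '<a href="https://evil.example/login">drive.google.com/share</a>'
print(mismatched_links(email_body))  # [('drive.google.com', 'evil.example')]
```

A real mail filter does far more than this, but the core heuristic is the same one your eyes apply when you hover.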
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
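For the curious, app-based codes typically come from the TOTP algorithm standardized in RFC 6238. The sketch below is an illustration only, using the RFC’s published test secret; real authenticator apps and servers also handle clock drift, rate limiting, and secret provisioning:

```python
import hashlib
import hmac
import struct

# Minimal sketch of the TOTP algorithm (RFC 6238) behind app-based 2FA
# codes. Illustration only, not a production implementation.
def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # big-endian 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at Unix time 59 the expected 6-digit code is 287082.
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough to log in.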
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Most dubious uses of AI at CES 2026
But wearables, screens, and appliances are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
SleepQ pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia for its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Photo: Dominic Preston / The Verge
Should you buy your kid an AI toy that gives it a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.