Technology
Meet the humanoid robot that learns from natural language, mimics human emotions
Imagine having a robot friend that can take selfies, toss a ball, eat popcorn and play air guitar.
Well, you might not have to wait too long.
Researchers at the University of Tokyo have created a robot that can do all that and more, thanks to the power of GPT-4, OpenAI’s advanced large language model (LLM).
CLICK TO GET KURT’S FREE CYBERGUY NEWSLETTER WITH SECURITY ALERTS, QUICK VIDEO TIPS, TECH REVIEWS, AND EASY HOW-TO’S TO MAKE YOU SMARTER
A researcher gives Alter3, a humanoid robot, verbal instructions. (University of Tokyo)
What is the Alter3 humanoid robot, and how does it work?
Alter3 is a humanoid robot that was first introduced in 2016 as a platform for exploring the concept of life in artificial systems. It has a realistic appearance and can move its upper body, head and facial muscles with 43 axes controlled by air actuators. It also has a camera in each eye that allows it to see and interact with humans and the environment.
Alter3 interacts with a human. (University of Tokyo)
But what makes Alter3 really special is that it can now use GPT-4, a deep learning model that generates natural language text from any given prompt, to control its movements and behaviors. This means that instead of programming every single action for the robot, the researchers can simply give it verbal instructions and let GPT-4 generate the corresponding Python code that drives the android engine.
For example, to make Alter3 take a selfie, the researchers can say something like:
“Create a big, joyful smile and widen your eyes to show excitement. Swiftly turn the upper body slightly to the left, adopting a dynamic posture. Raise the right hand high, simulating a phone. Flex the right elbow, bringing the phone closer to the face. Tilt the head slightly to the right, giving a playful vibe.”
And GPT-4 will produce the code that makes Alter3 do exactly that.
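The pipeline can be sketched roughly as follows. Everything here is hypothetical: the `set_axis` function, the axis numbers, and the hard-coded "generated" code are stand-ins, not Alter3's actual API. The idea is simply that the LLM's output is ordinary Python calling the robot's motor-control functions, which the system then executes.

```python
# Hypothetical sketch of the text-to-motion idea. `set_axis` and the axis
# numbers are stand-ins for Alter3's real motor API, and the "generated"
# code is hard-coded here rather than coming from a live GPT-4 call.

commands = []  # record of (axis, position) commands sent to the "robot"

def set_axis(axis: int, position: float) -> None:
    """Stub for an air-actuator command on one of the robot's 43 axes."""
    commands.append((axis, position))

# In the real system, GPT-4 would turn the verbal selfie instruction into
# Python source along these lines, which is then executed on the robot:
generated_code = """
set_axis(14, 0.9)   # wide smile
set_axis(2, 0.3)    # turn upper body slightly left
set_axis(27, 1.0)   # raise right hand high, simulating a phone
set_axis(29, 0.7)   # flex right elbow toward face
set_axis(5, 0.2)    # tilt head slightly right
"""
exec(generated_code)

print(len(commands))  # 5 actuator commands queued
```

The point of this design is that the language model, not a human programmer, writes the low-level control code, so new behaviors only cost a new sentence.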
Alter3 mimics taking a selfie. (University of Tokyo)
What can the Alter3 humanoid robot do with GPT-4?
The researchers have tested Alter3 with GPT-4 in various scenarios, such as tossing a ball, eating popcorn, and playing air guitar. They have also experimented with different types of feedback, such as linguistic, visual, and emotional, to improve the robot’s performance and adaptability.
Alter3 mimics playing a guitar. (University of Tokyo)
One of the most interesting aspects of Alter3’s behavior is that it can learn from its own memory and from human responses. For instance, if the robot does something that makes a human laugh or smile, it will remember that and try to repeat it in the future. This is similar to how newborn babies imitate their parents’ expressions and gestures.
Alter3 mimics jogging. (University of Tokyo)
The researchers have also added some humor and personality to Alter3’s actions. In one case, the robot pretends to eat a bag of popcorn, only to realize that it belongs to the person sitting next to it. It then shows a surprised and embarrassed expression and apologizes with its arms.
Alter3, the humanoid robot (University of Tokyo)
Why is this humanoid robot AI important and what are the implications?
The research team behind Alter3 believes that this is a breakthrough in the field of robotics and artificial intelligence, as it shows how large language models can be used to bridge the gap between natural language and robot control. This opens up new possibilities for human-robot collaboration and communication, as well as for creating more intelligent, adaptable, and personable robotic entities.
Alter3 mimics seeing a pretend snake. (University of Tokyo)
The paper, titled “From Text to Motion: Grounding GPT-4 in a Humanoid Robot ‘Alter3,’” was written by Takahide Yoshida, Atsushi Masumori and Takashi Ikegami and is available on the preprint server arXiv. The authors hope that their work will inspire more research and development in this direction and that one day we might be able to have robot friends that can understand us and share our interests and emotions.
Kurt’s key takeaways
Alter3 is an example of how natural language processing and robotics can work together to create pretty incredible interactions. By using GPT-4, the robot can perform a variety of tasks and behaviors based on verbal commands, without requiring extensive programming or manual control. This also allows the robot to learn from its own experience and from human feedback and to express some humor and personality. Alter3 demonstrates the potential of large language models to improve the field of robotics and artificial intelligence as well as bring us closer to having robot friends that can relate to us and entertain us.
What do you think of Alter3 and its abilities? Would you like to have a robot like that in your life? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Ask Kurt a question or let us know what stories you’d like us to cover.
Copyright 2024 CyberGuy.com. All rights reserved.
Technology
Xbox’s Towerborne is switching from a free-to-play game to a paid one
Towerborne, a side-scrolling action RPG published by Xbox Game Studios that has been available in early access, will officially launch on February 26th. But instead of debuting as a free-to-play, always-on online game as originally planned, Towerborne is going to be a paid game that you can play offline.
“You will own the complete experience permanently, with offline play and online co-op,” Trisha Stouffer, CEO and president of Towerborne developer Stoic, says in an Xbox Wire blog post. “This change required deep structural rebuilding over the past year, transforming systems originally designed around constant connectivity. The result is a stronger, more accessible, and more player-friendly version of Towerborne — one we’re incredibly proud to bring to launch.”
“After listening to our community during Early Access and Game Preview, we learned players wanted a complete, polished experience without ongoing monetization mechanics,” according to an FAQ. “Moving to a premium model lets us deliver the full game upfront—no live-service grind, no pay-to-win systems—just the best version of Towerborne.”
With popular live-service games like Fortnite and Roblox getting harder to usurp, Towerborne’s switch to a premium, offline-playable experience could make it more enticing for players who don’t want to jump into another time-sucking forever game. It makes Towerborne more appealing to me, at least.
With the 1.0 release of the game, Towerborne will have a “complete” story, new bosses, and a “reworked” difficulty system. You’ll also be able to acquire all in-game cosmetics for free through gameplay, with “no more cosmetic purchasing.” Players who are already part of early access will still be able to play the game.
Towerborne will launch on February 26th on Xbox Series X / S, Xbox on PC, Game Pass, Steam, and PS5. The standard edition will cost $24.99, while the deluxe edition will cost $29.99.
Technology
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – when you join my CYBERGUY.COM newsletter.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you were granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion. Many people are used to seeing these exact messages every day. Even more concerning, the emails bypassed common protections like SPF and DMARC because they were sent through Google-owned infrastructure. To email systems, nothing looked fake.
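To see why the "pass" results matter, here is a minimal sketch of how a receiving mail server's authentication verdict shows up in an email's headers. The raw message below is illustrative, not taken from the actual campaign; in this attack, SPF and DMARC genuinely passed because the mail really did leave Google's servers.

```python
from email import message_from_string

# Illustrative raw message -- header values are made up, not from the
# real campaign. The key point: both checks read "pass" because the
# message genuinely originated from Google-owned infrastructure.
raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=notifications.google.com;
 dmarc=pass header.from=google.com
From: Google Cloud <noreply@google.com>
Subject: You have a new voicemail

Click the link to listen.
"""

msg = message_from_string(raw)
auth = msg["Authentication-Results"]

spf_pass = "spf=pass" in auth
dmarc_pass = "dmarc=pass" in auth
print(spf_pass, dmarc_pass)
```

A spam filter that trusts these verdicts has little else to go on, which is why abusing legitimate infrastructure beats spoofing.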
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told CyberGuy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
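The domain check described above boils down to one question: is the host you'd actually be signing in to the service's real domain, or just something that looks official? A minimal sketch, with made-up URLs standing in for the campaign's redirect chain:

```python
from urllib.parse import urlparse

def final_domain_matches(link: str, expected: str) -> bool:
    """True only if the link's host is `expected` or a subdomain of it."""
    host = (urlparse(link).hostname or "").lower()
    return host == expected or host.endswith("." + expected)

# Both URLs are illustrative, not taken from the actual campaign.
# A Google-hosted page asking for a Microsoft login is the red flag:
phish = final_domain_matches(
    "https://storage.cloud.google.com/bucket/login.html",
    "microsoftonline.com")
legit = final_domain_matches(
    "https://login.microsoftonline.com/common/oauth2",
    "microsoftonline.com")
print(phish, legit)  # False True
```

Note the subdomain check requires the leading dot, so a lookalike host like `fakemicrosoftonline.com` would not slip through.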
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Most dubious uses of AI at CES 2026
You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots. But those are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
SleepQ smart sleep pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic AI picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia for its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Photo: Dominic Preston / The Verge
Should you buy your kid an AI toy with a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.