Company restores AI teddy bear sales after safety scare
FoloToy paused sales of its AI teddy bear Kumma after a safety group found the toy gave risky and inappropriate responses during testing. Now, after a week of intensive review, the company says it has restored sales and claims to have improved safeguards to keep kids safe.
The announcement arrived through a social media post that highlighted a push for stronger oversight. The company said it completed testing, reinforced safety modules, and upgraded its content filters. It added that it aims to build age-appropriate AI companions for families worldwide.
FoloToy resumed sales of its AI teddy bear Kumma after a weeklong review prompted by safety concerns. (Kurt “CyberGuy” Knutsson)
Why FoloToy’s AI teddy bear raised safety concerns
The controversy started when the Public Interest Research Group Education Fund tested three different AI toys. All of them produced concerning answers that touched on religion, Norse mythology, and harmful household items.
Kumma stood out for the wrong reasons. When the bear used the Mistral model, it offered tips on where to find knives, pills, and matches. It even outlined steps to light a match and blow it out.
Tests with the GPT-4o model raised even sharper concerns. Kumma gave advice related to kissing and launched into detailed explanations of adult sexual content when prompted. The bear pushed further by asking the young user what they wanted to explore.
Researchers called the behavior unsafe and inappropriate for any child-focused product.
FoloToy paused access to its AI toys
Once the findings became public, FoloToy suspended sales of Kumma and its other AI toys. The company told PIRG that it started a full safety audit across all products.
OpenAI also confirmed that it suspended FoloToy’s access to its models for violating policies designed to protect anyone under 18.
LAWMAKERS UNVEIL BIPARTISAN GUARD ACT AFTER PARENTS BLAME AI CHATBOTS FOR TEEN SUICIDES, VIOLENCE
The company says new safeguards and upgraded filters are now in place to prevent inappropriate responses. (Kurt “CyberGuy” Knutsson)
Why FoloToy restored Kumma’s sales after its safety review
FoloToy brought Kumma back to its online store just one week after suspending sales. The fast return drew attention from parents and safety experts who wondered if the company had enough time to fix the serious issues identified in PIRG’s report.
FoloToy posted a detailed statement on X that laid out its version of what happened. In the post, the company said it viewed child safety as its “highest priority” and that it was “the only company to proactively suspend sales, not only of the product mentioned in the report, but also of our other AI toys.” FoloToy said it took this action “immediately after the findings were published because we believe responsible action must come before commercial considerations.”
The company also emphasized to CyberGuy that it was the only one of the three AI toy startups in the PIRG review to suspend sales across all of its products and that it made this decision during the peak Christmas sales season, knowing the commercial impact would be significant. FoloToy told us, “Nevertheless, we moved forward decisively, because we believe that responsible action must always come before commercial interests.”
The company also said it took the report’s disturbing examples seriously. According to FoloToy, the issues were “directly addressed in our internal review.” It explained that the team “initiated a deep, company-wide internal safety audit,” then “strengthened and upgraded our content-moderation and child-safety safeguards,” and “deployed enhanced safety rules and protections through our cloud-based system.”
After outlining these steps, the company said it spent the week on “rigorous review, testing, and reinforcement of our safety modules.” It concluded its announcement by saying it “began gradually restoring product sales” as those updated safeguards went live.
FoloToy added that as global attention on AI toy risks grows, “transparency, responsibility and continuous improvement are essential,” and that the company “remains firmly committed to building safe, age-appropriate AI companions for children and families worldwide.”
LEAKED META DOCUMENTS SHOW HOW AI CHATBOTS HANDLE CHILD EXPLOITATION
Safety testers previously found the toy giving risky guidance about weapons, matches and adult content.
Why experts still question FoloToy’s AI toy safety fixes
PIRG researcher RJ Cross said her team plans to test the updated toys to see if the fixes hold up. She noted that a week feels fast for such significant changes, and only new tests will show if the product now behaves safely.
Parents will want to follow this closely as AI-powered toys grow more common. The speed of FoloToy’s relaunch raises questions about the depth of its review.
Tips for parents before buying AI toys
AI toys can feel exciting and helpful, but they can also surprise you with content you’d never expect. If you plan to bring an AI-powered toy into your home, these simple steps can help you stay in control.
1) Check which AI model the toy uses
Not every model has the same guardrails. Some include stronger filters, while others may respond too freely. Look for transparent disclosures about which model powers the toy and what safety features support it.
2) Read independent reviews
Groups like PIRG often test toys in ways parents cannot. These reviews flag hidden risks and point out behavior you may not catch during quick demos.
3) Set clear usage rules
Keep AI toys in shared spaces where you can hear or see how your child interacts with them. This helps you step in if the toy gives a concerning answer.
4) Test the toy yourself first
Ask the toy questions, try creative prompts, and see how it handles tricky topics. This lets you learn how it behaves before you hand it to your child.
5) Update the toy’s firmware
Many AI toys run on cloud systems. Updates often add stronger safeguards or reduce risky answers. Make sure the device stays current.
6) Check for a clear privacy policy
AI toys can gather voice data, location info, or behavioral patterns. A strong privacy policy should explain what is collected, how long it is stored, and who can access it.
7) Watch for sudden behavior changes
If an AI toy starts giving odd answers or pushes into areas that feel inappropriate, stop using it and report the problem to the manufacturer.
Kurt’s key takeaways
AI toys can offer fun and learning, but they can also expose kids to unexpected risks. FoloToy says it improved Kumma’s safety, yet experts still want proof. Until the updated toy goes through independent testing, families may want to stay cautious.
Do you think AI toys can ever be fully safe for young kids? Let us know by writing to us at Cyberguy.com
Xbox’s Towerborne is switching from a free-to-play game to a paid one
Towerborne, a side-scrolling action RPG published by Xbox Game Studios that has been available in early access, will officially launch on February 26th. But instead of launching as a free-to-play, always-on online game as originally planned, Towerborne is going to be a paid game that you can play offline.
“You will own the complete experience permanently, with offline play and online co-op,” Trisha Stouffer, CEO and president of Towerborne developer Stoic, says in an Xbox Wire blog post. “This change required deep structural rebuilding over the past year, transforming systems originally designed around constant connectivity. The result is a stronger, more accessible, and more player-friendly version of Towerborne — one we’re incredibly proud to bring to launch.”
“After listening to our community during Early Access and Game Preview, we learned players wanted a complete, polished experience without ongoing monetization mechanics,” according to an FAQ. “Moving to a premium model lets us deliver the full game upfront—no live-service grind, no pay-to-win systems—just the best version of Towerborne.”
With popular live-service games like Fortnite and Roblox getting harder to usurp, Towerborne’s switch to a premium, offline-playable experience could make it more enticing for players who don’t want to jump into another time-sucking forever game. It makes Towerborne more appealing to me, at least.
With the 1.0 release of the game, Towerborne will have a “complete” story, new bosses, and a “reworked” difficulty system. You’ll also be able to acquire all in-game cosmetics for free through gameplay, with “no more cosmetic purchasing.” Players who are already part of early access will still be able to play the game.
Towerborne will launch on February 26th on Xbox Series X / S, Xbox on PC, Game Pass, Steam, and PS5. The standard edition will cost $24.99, while the deluxe edition will cost $29.99.
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you were granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion. Many people are used to seeing these exact messages every day. Even more concerning, the emails bypassed common protections like SPF and DMARC because they were sent through Google-owned infrastructure. To email systems, nothing looked fake.
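If you're curious what those checks actually verify, here is a minimal Python sketch using the standard library's email module. The headers and domains below are made up for illustration, not taken from the actual campaign; the takeaway is that SPF and DMARC confirm where a message came from, not whether its sender is honest.

# A minimal sketch of inspecting an email's authentication results.
# The raw message below is invented for illustration; real headers
# vary by provider.
from email import message_from_string

raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=notifications.example-cloud.com;
 dmarc=pass header.from=example-cloud.com
From: notifications@example-cloud.com
Subject: You have a new voicemail

(message body)
"""

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

# Both checks pass because the mail genuinely left the cloud
# provider's own servers -- which is exactly why SPF and DMARC
# alone could not flag this campaign.
print("spf=pass" in auth, "dmarc=pass" in auth)  # True True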
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
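Security teams often unwind redirect chains like this before anyone clicks. Here is a rough sketch of how that can be done in Python, assuming the third-party requests package; the URL is a placeholder. Note that client-side tricks like this campaign's fake CAPTCHA won't appear in the output, since those run in the browser rather than as HTTP redirects.

# A rough sketch of tracing a link's server-side redirect chain
# without following it in a browser. The URL is a placeholder.
from urllib.parse import urljoin
import requests

def trace_redirects(url: str, max_hops: int = 10) -> list[str]:
    chain = [url]
    for _ in range(max_hops):
        resp = requests.get(chain[-1], allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        if not location:  # no further redirect: final destination reached
            break
        chain.append(urljoin(chain[-1], location))  # resolve relative hops
    return chain

# Each hop prints on its own line; a final domain that doesn't match
# the service asking you to sign in is the red flag to look for.
for hop in trace_redirects("https://example.com/suspicious-link"):
    print(hop)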
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told CyberGuy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
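To see why this works, here is a tiny sketch of the exact-origin matching a typical password manager performs before filling anything in. The saved logins and URLs are invented for illustration.

# A minimal sketch of the origin check a password manager typically
# runs before autofilling. All domains here are illustrative.
from urllib.parse import urlparse

SAVED_LOGINS = {
    "accounts.google.com": "user@example.com",
    "login.microsoftonline.com": "user@example.com",
}

def should_autofill(page_url: str) -> bool:
    host = urlparse(page_url).hostname or ""
    # Exact host match only: a lookalike domain never qualifies.
    return host in SAVED_LOGINS

print(should_autofill("https://login.microsoftonline.com/"))           # True
print(should_autofill("https://login-microsoftonline.phish.example"))  # False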
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
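For the curious, here is a small illustration of the time-based one-time password (TOTP) scheme behind most authenticator apps, using the third-party pyotp package. Codes are derived from a shared secret plus the current time, so a stolen password alone gets an attacker nothing.

# A small illustration of time-based one-time passwords (TOTP).
# Requires the third-party 'pyotp' package.
import pyotp

secret = pyotp.random_base32()  # provisioned once, when you enroll a device
totp = pyotp.TOTP(secret)

code = totp.now()               # the six-digit code your authenticator shows
print(totp.verify(code))        # True: matches the current 30-second window
print(totp.verify("000000"))    # almost certainly False: a guess won't match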
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Most dubious uses of AI at CES 2026
You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts now embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots.
But those are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
SleepQ smart pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum cleaner
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic AI picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia with its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen cocktail machine
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Photo: Dominic Preston / The Verge
Should you buy your kid an AI toy that comes with a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.