Most dubious uses of AI at CES 2026

You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts now embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots.
But those are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
SleepQ pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia for its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Should you buy your kid an AI toy that gives them a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.
Anthropic wants you to use Claude to ‘Cowork’ in latest AI agent push
Anthropic wants to expand Claude’s AI agent capabilities and take advantage of the growing hype around Claude Code — and it’s doing it with a brand-new feature released Monday, dubbed “Claude Cowork.”
“Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks,” Anthropic wrote in a blog post. The company is releasing it as a “research preview” so the team can learn more about how people use it and continue building accordingly. So far, Cowork is only available via Claude’s macOS app, and only for subscribers of Anthropic’s power-user tier, Claude Max, which costs $100 to $200 per month depending on usage.
Here’s how Claude Cowork works: A user gives Claude access to a folder on their computer, allowing the chatbot to read, edit, or create files. (Examples Anthropic gave included the ability to “re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.”) Claude will provide regular updates on what it’s working on, and users can also use existing connectors to link it to external info (like Asana, Notion, PayPal, and other supported partners) or link it to Claude in Chrome for browser-related tasks.
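For a sense of the chore in Anthropic’s downloads example, this is the kind of task that would otherwise take a hand-rolled script. A minimal sketch — the extension-to-folder mapping and collision-renaming scheme are illustrative assumptions, not anything Anthropic ships:

```python
from pathlib import Path
import shutil

# Hypothetical mapping of file extensions to destination subfolders.
CATEGORIES = {
    ".pdf": "documents",
    ".png": "images",
    ".jpg": "images",
    ".csv": "spreadsheets",
}

def organize_downloads(downloads: Path) -> None:
    """Sort loose files into subfolders by extension, renaming on collision."""
    for item in list(downloads.iterdir()):
        if not item.is_file():
            continue
        dest_dir = downloads / CATEGORIES.get(item.suffix.lower(), "other")
        dest_dir.mkdir(exist_ok=True)
        dest = dest_dir / item.name
        n = 1
        while dest.exists():  # avoid clobbering: file.pdf -> file_1.pdf
            dest = dest_dir / f"{item.stem}_{n}{item.suffix}"
            n += 1
        shutil.move(str(item), str(dest))
```

The pitch, in other words, is that a natural-language request replaces maintaining scripts like this one.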
“You don’t need to keep manually providing context or converting Claude’s outputs into the right format,” Anthropic wrote. “Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel. It feels much less like a back-and-forth and much more like leaving messages for a coworker.”
The new feature is part of Anthropic’s (and its competitors’) bid to provide the most actually useful AI agents, for both consumers and enterprises. AI agents have come a long way from their humble beginnings as mostly-theoretically-useful tools, but there’s still much more development needed before you’ll see your non-tech-industry friends using them to complete everyday tasks.
Anthropic’s “Skills for Claude,” announced in October, was a partial precursor to Cowork. Skills let Claude improve at specialized tasks by way of “folders that include instructions, scripts, and resources that Claude can load when needed to make it smarter at specific work tasks — from working with Excel [to] following your organization’s brand guidelines,” per a release at the time. People could also build their own Skills tailored to their specific jobs and the tasks they needed completed.
As part of the announcement, Anthropic warned about the potential dangers of using Cowork and other AI agent tools: if instructions aren’t clear, Claude can delete local files and take other “potentially destructive actions,” and prompt injection attacks raise a range of safety concerns. Prompt injection attacks often involve bad actors hiding malicious text in a website the model is referencing, instructing the model to bypass its safeguards and do something harmful, such as hand over personal data. “Agent safety — that is, the task of securing Claude’s real-world actions — is still an active area of development in the industry,” Anthropic wrote.
Claude Max subscribers can try out the new feature by clicking on “Cowork” in the sidebar of the macOS app. Other users can join the waitlist.
Robots that feel pain react faster than humans
Touch something hot, and your hand snaps back before you even think. That split second matters.
Sensory nerves in your skin send a rapid signal to your spinal cord, which triggers your muscles right away. Your brain catches up later. Most robots cannot do this. When a humanoid robot touches something harmful, sensor data usually travels to a central processor, waits for analysis and then sends instructions back to the motors. Even tiny delays can lead to broken parts or dangerous interactions.
As robots move into homes, hospitals and workplaces, that lag becomes a real problem.
A robotic skin designed to mimic the human nervous system
Scientists at the Chinese Academy of Sciences and collaborating universities are tackling this challenge with a neuromorphic robotic e-skin, also known as NRE-skin. Instead of acting like a simple pressure pad, this skin works more like a human nervous system. Traditional robot skins can tell when they are touched. They cannot tell whether that touch is harmful. The new e-skin can do both. That difference changes everything.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
A humanoid robot equipped with neuromorphic e-skin reacts instantly to harmful touch, mimicking the human nervous system to prevent damage and improve safety. (Eduardo Parra/Europa Press via Getty Images)
How the neuromorphic e-skin works
The e-skin is built in four layers that mirror how human skin and nerves function. The top layer acts as a protective outer covering, similar to the epidermis. Beneath it sit sensors and circuits that behave like sensory nerves.

Even when nothing touches the robot, the skin sends a small electrical pulse to the robot every 75 to 150 seconds. This signal acts like a status check that says everything is fine. When the skin is damaged, that pulse stops. The robot immediately knows where it was injured and alerts its owner.

Touch creates another signal. Normal contact sends neural-like spikes to the robot’s central processor for interpretation. However, extreme pressure triggers something different.
How robots detect pain and trigger instant reflexes
If force exceeds a preset threshold, the skin generates a high-voltage spike that goes straight to the motors. This bypasses the central processor entirely. The result is a reflex. The robot can pull its arm away instantly, much like a human does after touching a hot surface. The pain signal only appears when the contact is truly dangerous, which helps prevent overreaction. This local reflex system reduces damage, improves safety and makes interactions feel more natural.
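As a rough illustration of the two-path routing described above — normal contact forwarded to the central processor for interpretation, forces past a threshold firing the motors directly, and a periodic heartbeat pulse confirming the skin is intact — here is a sketch in Python. The threshold value, function names, and return labels are invented for illustration and are not taken from the research:

```python
# Illustrative sketch of the NRE-skin's signal routing as described in the
# article; the 20 N threshold and these interfaces are assumptions, not the
# real hardware design.

PAIN_THRESHOLD_N = 20.0  # hypothetical harmful-force threshold, in newtons

def route_touch(force_newtons: float) -> str:
    """Route a contact signal: reflex path for harmful force, else processor."""
    if force_newtons >= PAIN_THRESHOLD_N:
        # A high-voltage spike goes straight to the motors, bypassing the
        # central processor entirely -- an immediate withdrawal reflex.
        return "reflex: withdraw"
    # Normal contact is encoded as neural-like spikes and sent on for
    # interpretation by the central processor.
    return "processor: interpret"

def heartbeat_ok(seconds_since_pulse: float) -> bool:
    """Intact skin pulses every 75-150 s; a missing pulse signals damage."""
    return seconds_since_pulse <= 150.0
```

The key design point is that the reflex branch never waits on the processor, which is what gives the skin its human-like reaction time.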
Scientists developed a robotic skin that can detect pain and trigger reflexes without waiting for a central processor to respond. (Han Suyuan/China News Service/VCG via Getty Images)
Self-repairing robotic skin makes fixes fast
The design includes another clever feature. The e-skin is made from magnetic patches that fit together like building blocks. If part of the skin gets damaged, an owner can remove the affected patch and snap in a new one within seconds. There is no need to replace the entire surface. That modular approach saves time, lowers costs and keeps robots in service longer.
Why pain-sensing skin matters for real-world robots
Future service robots will need to work close to people. They will assist patients, help older adults and operate safely in crowded spaces. A sense of touch that includes pain and injury detection makes robots more aware and more trustworthy. It also reduces the risk of accidents caused by delayed reactions or sensor overload. The research team says their neural-inspired design improves robotic touch, safety and intuitive human-robot interaction. It is a key step toward robots that behave less like machines and more like responsive partners.
What this technology means for the future of robots
The next challenge is sensitivity. The researchers want the skin to recognize multiple touches at the same time without confusion. If successful, robots could handle complex physical tasks while staying alert to danger across their entire surface. That brings humanoid robots one step closer to acting on instinct.
A new e-skin design allows robots to pull away from dangerous contact in milliseconds, reducing the risk of injury or mechanical failure. (CFOTO/Future Publishing via Getty Images)
Kurt’s key takeaways
Robots that can feel pain may sound unsettling at first. In reality, it is about protection, speed and safety. By copying how the human nervous system works, scientists are giving robots faster reflexes and better judgment in the physical world. As robots become part of daily life, those instincts could make all the difference.
Would you feel more comfortable around a robot if it could sense pain and react instantly, or does that idea raise new concerns for you? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd
Billy Woods has one of the highest batting averages in the game. Between his solo records like Hiding Places and Maps, and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And, while no one would ever claim that Woods’ albums were light-hearted fare (these are not party records), Golliwog represents his darkest record to date.
This is not your typical horrorcore record. Other acts, like the Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.
Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:
“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”
Throughout the record, Woods turns to his producers to craft not cheap scares, but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a POV shot from a serial killer. And “All These Worlds Are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than it does with some of the other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.
That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:
If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips
The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.