Technology
A morning with the Rabbit R1: a fun, funky, unfinished AI gadget
There were times I wasn’t sure the Rabbit R1 was even a real thing. The AI-powered, Teenage Engineering-designed device came out of nowhere to become one of the biggest stories at CES, promising a level of fun and whimsy that felt refreshing next to some of the more self-serious AI companies out there. CEO Jesse Lyu practically promised the world with this $199 device.
Well, say this for Rabbit: it’s real. Last night, I went to the swanky TWA Hotel in New York City, along with a few hundred reporters, creators, and particularly enthusiastic R1 buyers. After a couple of hours of photo booths, specialty cocktails, and a rousing keynote and demo from Lyu, in which he repeatedly referenced and poked fun at the Humane AI Pin, we all got our R1s to take home. I’ve been using mine ever since, and I have some thoughts. And some questions.
From a hardware perspective, the R1 screams “kinda meh Android phone.” Here are the salient specs: it’s about three inches tall and wide and a half-inch thick. It weighs 115 grams, which is about two-thirds as much as the iPhone 15. It has a 2.88-inch screen, runs on a 2.3GHz MediaTek MT6765 processor, and has 128 gigs of storage and four gigs of RAM. It has a speaker on the back, two mics on the top, and a SIM card slot on the side right next to the USB-C charging port. It only comes in one color, a hue Rabbit calls “leuchtorange” but is often known as “brilliant orange” or “luminous orange.” It’s definitely orange, and it’s definitely luminous.
At this point, the best way I can describe the R1 is like a Picasso painting of a smartphone: it has most of the same parts, just laid out really differently. Instead of sitting on top or in the back, the R1’s camera sits in a cutout space on the right side of the device, where it can spin its lens to face both toward and away from you.
The R1 is like a Picasso painting of a smartphone
After spending a few hours playing with the device, I have to say: it’s pretty nice. Not luxurious, or even particularly high-end, just silly and fun. Where Humane’s AI Pin feels like a carefully sculpted metal gem, the R1 feels like an old-school MP3 player crossed with a fidget spinner. The wheel spins a little stiffly for my taste but smoothly enough, the screen is a little fuzzy but fine, and the main action button feels satisfying to thump on.
When I first got the device and connected it to Wi-Fi, it immediately asked me to sign up for an account at Rabbithole, the R1’s web portal. I did that, scanned a QR code with the R1 to get it synced up, and then ran a software update. While the update installed, I logged in to the only four external services the R1 currently connects to: Spotify, Uber, DoorDash, and Midjourney.
Once I was eventually up and running, I started chatting with the R1. So far, it does a solid job with basic AI questions: it gave me lots of good information about this week’s NFL draft, found a few restaurants near me, and knew when Herbert Hoover was president. This is all fairly basic ChatGPT stuff, and there’s some definite lag as it fetches answers, but I much prefer the interface to the Humane AI Pin: because there’s a screen, you can see the thing working, so the AI delays don’t feel quite so interminable.
Because there’s a screen, the AI delays don’t feel quite so interminable
Almost immediately, though, I started running into stuff the R1 just can’t do. It can’t send emails or make spreadsheets, though Lyu has been demoing both for months. Rabbithole is woefully unfinished, too: when I tried to tap around on my phone, it instead dragged a cursor that trailed every tap by a half-second. That’s a good reminder that the whole thing runs on a virtual machine storing all your apps and credentials, which still gives me security-related pause.
Oh, and here’s my favorite thing that has happened on the R1 so far: I got it connected to my Spotify account, which is a feature I’m particularly excited about. I asked for “Beyoncé’s new album,” and the device excitedly went and found me “Crazy in Love” — a lullaby version, from an artist called “Rockabye Baby!” So close and yet so far. It doesn’t seem to be able to find my playlists, either, or skip tracks. When I said, “Play The 1975,” though, that worked fine and quickly. (The speaker, by the way, is very much crappy Android phone quality. You’re going to want to use that Bluetooth connection.)
The R1’s Vision feature, which uses the camera to identify things in the scene around you, seems to work fine as long as all you want is a list of objects in the scene. The device can’t take a photo or video and doesn’t seem to be able to do much else with what it can see.
When you’re not doing anything, the screen shows the time and that bouncing rabbit-head logo. When you press and hold the side button to issue a command, the time and battery fade away, and the rabbit’s ears perk up like it’s listening. It’s very charming! The overall interface is simple and text-based, but it’s odd in spots: it’s not always obvious how to go back, for instance, and you only get to see a line or two of text at a time at the very bottom of the screen, even when there’s a whole paragraph of answer to read.
Rabbit’s roadmap is ambitious: Lyu has spent the last few months talking about all the things the R1’s so-called “Large Action Model” can do, including learning apps and using them for you. During last night’s event, he talked about opening up the USB-C port on the device to allow accessories, keyboards, and more. That’s all coming… eventually. Supposedly. For now, the R1’s feature set is much more straightforward. You can use the device to play music, get answers to questions, translate speech, take notes, summon an Uber, and a few other things.
That means there’s still an awful lot the R1 can’t do and a lot I have left to test. (Anything you want to know about, by the way, let me know!) I’m particularly curious about its battery life, its ability to work with a bad connection, whether it heats up over time, and how it handles more complex tasks than just looking up information and ordering chicken nuggets. But so far, this thing seems like it’s trying to be less like a smartphone killer and more like the beginnings of a useful companion. That’s probably as ambitious as it makes sense to be right now — though Lyu and the Rabbit folks have a lot of big promises to eventually live up to and not a lot of time to do so.
Photography by David Pierce / The Verge
Technology
Birdbuddy’s new smart feeders aim to make spotting birds easier, even for beginners
Birdbuddy is introducing two new smart bird feeders: the flagship Birdbuddy 2 and the more compact, cheaper Birdbuddy 2 Mini, which is aimed at first-time users and smaller outdoor spaces. Both models are designed to be faster and easier to use than previous generations, with upgraded cameras that can shoot in portrait or landscape and wake instantly when a bird lands, so you’re less likely to miss the good stuff.
The Birdbuddy 2 costs $199 and features a redesigned circular camera housing that delivers 2K HDR video, slow-motion recording, and a wider 135-degree field of view. The upgraded built-in mic should also better pick up birdsong, which could make identifying species easier using both sound and sight.
The feeder itself offers a larger seed capacity and an integrated perch extender, along with support for both 2.4GHz and 5GHz Wi-Fi for more stable connectivity. The new model also adds dual integrated solar panels to help keep it powered throughout the day, plus a night sleep mode to conserve power.
The Birdbuddy 2 Mini is designed to deliver the same core AI bird identification and camera experience, but in a smaller, more accessible package. At 6.95 inches tall with a smaller seed capacity, it’s geared toward first-time smart birders and smaller outdoor spaces like balconies, and it supports an optional solar panel.
Birdbuddy 2’s first batch of preorders has already sold out, with shipments expected in February 2026 and wider availability set for mid-2026. Meanwhile, the Birdbuddy 2 Mini will be available to preorder for $129 in mid-2026, with the company planning on shipping the smart bird feeder in late 2026.
Technology
Robots learn 1,000 tasks in one day from a single demo
Most robot headlines follow a familiar script: a machine masters one narrow trick in a controlled lab, then comes the bold promise that everything is about to change. I usually tune those stories out. We have heard about robots taking over since science fiction began, yet real-life robots still struggle with basic flexibility. This time felt different.
Researchers highlight the milestone that shows how a robot learned 1,000 real-world tasks in just one day. (Science Robotics)
How robots learned 1,000 physical tasks in one day
A new report published in Science Robotics caught our attention because the results feel genuinely meaningful, impressive and a little unsettling in the best way. The research comes from a team of academic scientists working in robotics and artificial intelligence, and it tackles one of the field’s biggest limitations.
The researchers taught a robot to learn 1,000 different physical tasks in a single day using just one demonstration per task. These were not small variations of the same movement. The tasks included placing, folding, inserting, gripping and manipulating everyday objects in the real world. For robotics, that is a big deal.
Why robots have always been slow learners
Until now, teaching robots physical tasks has been painfully inefficient. Even simple actions often require hundreds or thousands of demonstrations. Engineers must collect massive datasets and fine-tune systems behind the scenes. That is why most factory robots repeat one motion endlessly and fail as soon as conditions change. Humans learn differently. If someone shows you how to do something once or twice, you can usually figure it out. That gap between human learning and robot learning has held robotics back for decades. This research aims to close that gap.
The research team behind the study focuses on teaching robots to learn physical tasks faster and with less data. (Science Robotics)
How the robot learned 1,000 tasks so fast
The breakthrough comes from a smarter way of teaching robots to learn from demonstrations. Instead of memorizing entire movements, the system breaks tasks into simpler phases: one phase focuses on aligning with the object, and the other handles the interaction itself. The method builds on imitation learning, an AI technique that allows robots to learn physical tasks from human demonstrations.
The robot then reuses knowledge from previous tasks and applies it to new ones. This retrieval-based approach allows the system to generalize rather than start from scratch each time. Using this method, called Multi-Task Trajectory Transfer, the researchers trained a real robot arm on 1,000 distinct everyday tasks in under 24 hours of human demonstration time.
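To make the idea concrete, here is a minimal sketch of what retrieval-based trajectory transfer with an alignment/interaction split could look like in code. Everything in it (the DemoLibrary class, the descriptor vectors, the nearest-neighbor retrieval, the pose-offset math) is an illustrative assumption, not the authors’ actual Multi-Task Trajectory Transfer implementation:

```python
import numpy as np

class DemoLibrary:
    """Hypothetical store of single demonstrations. Each entry pairs an
    object descriptor (a feature vector) with a recorded end-effector
    trajectory, split into an alignment phase and an interaction phase."""

    def __init__(self):
        self.descriptors = []   # object feature vectors
        self.alignments = []    # trajectories that approach/align with the object
        self.interactions = []  # trajectories that do the actual manipulation

    def add(self, descriptor, alignment_traj, interaction_traj):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.alignments.append(np.asarray(alignment_traj, dtype=float))
        self.interactions.append(np.asarray(interaction_traj, dtype=float))

    def retrieve(self, descriptor):
        """Return the stored demo whose object descriptor is closest."""
        query = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(query - d) for d in self.descriptors]
        i = int(np.argmin(dists))
        return self.alignments[i], self.interactions[i]

def transfer(library, new_descriptor, new_object_pose):
    """Reuse the closest demo: re-target its alignment phase to the new
    object's position, then replay the interaction phase relative to
    where the new alignment ends."""
    alignment, interaction = library.retrieve(new_descriptor)
    # Shift the alignment phase so it ends at the new object pose.
    offset = np.asarray(new_object_pose, dtype=float) - alignment[-1]
    new_alignment = alignment + offset
    # Replay the interaction phase relative to the new end point.
    new_interaction = interaction - interaction[0] + new_alignment[-1]
    return np.vstack([new_alignment, new_interaction])

# Toy usage: one demo of reaching a cup at (0.4, 0.1, 0.2) and lifting it.
lib = DemoLibrary()
lib.add(
    descriptor=[1.0, 0.0],  # stand-in features for "cup"
    alignment_traj=[[0, 0, 0.3], [0.2, 0.05, 0.25], [0.4, 0.1, 0.2]],
    interaction_traj=[[0.4, 0.1, 0.2], [0.4, 0.1, 0.35]],
)
# A new, similar cup sits somewhere else; transfer the old demo to it.
plan = transfer(lib, new_descriptor=[0.9, 0.1],
                new_object_pose=[0.6, -0.2, 0.2])
print(plan)
```

The point the sketch captures is the reuse: a new task is not learned from scratch. Instead, the closest prior demonstration is retrieved and re-targeted to the new object, which is what lets one demonstration per task go so far.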
Importantly, this was not done in a simulation. It happened in the real world, with real objects, real mistakes and real constraints. That detail matters.
Why this research feels different
Many robotics papers look impressive on paper but fall apart outside perfect lab conditions. This one stands out because it tested the system through thousands of real-world rollouts. The robot also showed it could handle new object instances it had never seen before. That ability to generalize is what robots have been missing. It is the difference between a machine that repeats and one that adapts.
The robot arm practices everyday movements like gripping, folding and placing objects using a single human demonstration. (Science Robotics)
A long-standing robotics problem may finally be cracking
This research addresses one of the biggest bottlenecks in robotics: inefficient learning from demonstrations. By decomposing tasks and reusing knowledge, the system achieved an order of magnitude improvement in data efficiency compared to traditional approaches. That kind of leap rarely happens overnight. It suggests that the robot-filled future we have talked about for years may be nearer than it looked even a few years ago.
What this means for you
Faster learning changes everything. If robots need less data and less programming, they become cheaper and more flexible. That opens the door to robots working outside tightly controlled environments.
In the long run, this could enable home robots to learn new tasks from simple demonstrations instead of specialized code. It also has major implications for healthcare, logistics and manufacturing.
More broadly, it signals a shift in artificial intelligence. We are moving away from flashy tricks and toward systems that learn in more human-like ways. Not smarter than people. Just closer to how we actually operate day to day.
Kurt’s key takeaways
Robots learning 1,000 tasks in a day does not mean your house will have a humanoid helper tomorrow. Still, it represents real progress on a problem that has limited robotics for decades. When machines start learning more like humans, the conversation changes. The question shifts from what robots can repeat to what they can adapt to next. That shift is worth paying attention to.
If robots can now learn like us, what tasks would you actually trust one to handle in your own life? Let us know by writing to us at Cyberguy.com
Technology
Plaud updates the NotePin with a button
Plaud has updated its compact NotePin AI recorder. The new NotePin S is almost identical to the original, except for one major difference: a button. It’s joined by a new Plaud Desktop app for recording audio in online meetings, which is free to owners of any Plaud Note or NotePin.
The NotePin S has the same Fitbit-esque design as the 2024 original and ships with a lanyard, wristband, clip, and magnetic pin, so you can wear it just about any way you please. All of those accessories are now included in the box; previously, the lanyard and wristband were sold separately.
It’s about the same size as the NotePin, comes in the same colors (black, purple, or silver), offers similar battery life, and still supports Apple Find My. Like the NotePin, it records audio and generates transcriptions and summaries, whether those are meeting notes, action points, or reminders.
But now it has a button. Whereas the first NotePin used haptic controls, relying on a long squeeze to start recording and a short buzz to confirm it worked, the S switches to something simpler: a long press of the button starts recording, and a short tap adds highlight markers. Plaud’s explanation for the change is straightforward: buttons are less ambiguous, so you’ll always know you’ve pressed one and started recording. Original NotePin users complained they sometimes failed to record because they hadn’t squeezed just right.
AI recorders like this live or die by ease of use, so removing a little friction gives Plaud better odds of survival.
Alongside the NotePin S, Plaud is launching a new Mac and PC application for recording the audio from online meetings. Plaud Desktop runs in the background and activates whenever it detects calls from apps including Zoom, Meet, and Teams, recording both system audio and your microphone. You can set it either to record meetings automatically or to require manual activation, and unlike some alternatives, it doesn’t create a bot that joins the call with you.
Recordings and notes are synced with those from Plaud’s line of hardware recorders, with the same models used for transcription and generation, creating a “seamless” library of audio from your meetings, both online and off.
Plaud Desktop is available now and is free to anyone who already owns a Plaud Note or NotePin device. The new NotePin S is also available today, for $179 — $20 more than the original, which Plaud says will now be phased out.