

The DJI Romo robovac had security so poor, this man remotely accessed thousands of them


Sammy Azdoufal claims he wasn’t trying to hack every robot vacuum in the world. He just wanted to remote-control his brand-new DJI Romo vacuum with a PS5 gamepad, he tells The Verge, because it sounded fun.

But when his homegrown remote control app started talking to DJI’s servers, it wasn’t just one vacuum cleaner that replied. Roughly 7,000 of them, all around the world, began treating Azdoufal like their boss.

He could remotely control them, and look and listen through their live camera feeds, he tells me, saying he tested that out with a friend. He could watch them map out each room of a house, generating a complete 2D floor plan. He could use any robot’s IP address to find its rough location.

“I found my device was just one in an ocean of devices,” he says.

A map like the one I saw, with robots and packets trickling in.
Image: Gonzague Dambricourt

On Tuesday, when he showed me his level of access in a live demo, I couldn’t believe my eyes. Tens, hundreds, thousands of robots reporting for duty, each phoning home an MQTT data packet every three seconds with its serial number, which rooms it’s cleaning, what it’s seen, how far it’s traveled, when it’s returning to the charger, and the obstacles it encountered along the way.

I watched each of these robots slowly pop into existence on a map of the world. Nine minutes after we began, Azdoufal’s laptop had already cataloged 6,700 DJI devices across 24 different countries and collected over 100,000 of their messages. If you add the company’s DJI Power portable power stations, which also phone home to these same servers, Azdoufal had access to over 10,000 devices.
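To make that stream of status packets concrete, here is a hypothetical sketch of what one such payload might look like. Every field name, value, and the placeholder serial below are invented for illustration; DJI’s actual MQTT schema has not been published.

```python
import json

# Hypothetical status payload of the kind described above; every field name
# and value here is invented for illustration, not DJI's actual schema.
raw = json.dumps({
    "sn": "EXAMPLESERIAL1",          # placeholder serial number
    "room": "living_room",           # which room it's cleaning
    "battery_pct": 80,
    "distance_m": 142.7,             # how far it's traveled
    "returning_to_dock": False,
    "obstacles": ["cable", "shoe"],  # what it ran into along the way
})

# A listener receiving such packets can summarize each device at a glance:
msg = json.loads(raw)
summary = f"{msg['sn']}: cleaning {msg['room']} at {msg['battery_pct']}% battery"
print(summary)
```

With thousands of devices emitting a packet like this every few seconds, building the live world map described above is little more than parsing and plotting.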

Azdoufal says he could remote-control robovacs and view live video over the internet.

When I say I couldn’t believe my eyes at first, I mean that literally. Azdoufal leads AI strategy at a vacation rental home company; when he told me he reverse engineered DJI’s protocols using Claude Code, I had to wonder whether AI was hallucinating these robots. So I asked my colleague Thomas Ricker, who just finished reviewing the DJI Romo, to pass us its serial number.

With nothing more than that 14-digit number, Azdoufal could not only pull up our robot, he could correctly see it was cleaning the living room and had 80 percent battery life remaining. Within minutes, I watched the robot generate and transmit an accurate floor plan of my colleague’s house, with the correct shape and size of each room, just by typing some digits into a laptop located in a different country.

Here are two maps of Thomas’ living space. Above is what we pulled from DJI’s servers without authentication; below is what the owner sees on their own phone.
Screenshots by The Verge

Here’s a fuller floor plan from Gonzague Dambricourt, who tried out a read-only version of Azdoufal’s tool.
Image: Gonzague Dambricourt (X)

Separately, Azdoufal pulled up his own DJI Romo’s live video feed, completely bypassing its security PIN, then walked into his living room and waved to the camera while I watched. He also says he shared a limited read-only version of his app with Gonzague Dambricourt, CTO at an IT consulting firm in France; Dambricourt tells me the app let him remotely watch his own DJI Romo’s camera feed before he even paired it.


Azdoufal was able to do all of this without hacking into DJI’s servers, he claims. “I didn’t infringe any rules, I didn’t bypass, I didn’t crack, brute force, whatever.” He says he simply extracted his own DJI Romo’s private token — the key that tells DJI’s servers that you should have access to your own data — and those servers gave him the data of thousands of other people as well. He shows me that he can access DJI’s pre-production server, as well as the live servers for the US, China, and the EU.

DJI has MQTT servers associated with the US, EU, and China. I’m not sure what VG stands for.
Screenshot by Sean Hollister / The Verge

Here’s the good news: On Tuesday, Azdoufal was not able to take our DJI Romo on a joyride through my colleague’s house, see through its camera, or listen through its microphone. DJI had already restricted that form of access after both Azdoufal and I told the company about the vulnerabilities.

And by Wednesday morning, Azdoufal’s scanner no longer had access to any robots, not even his own. It appears that DJI has plugged the gaping hole.

But this incident raises serious questions about DJI’s security and data practices. It will no doubt be used to help retroactively justify fears that led to the Chinese dronemaker getting largely forced out of the US. If Azdoufal could find these robots without even looking for them, what protects them from people who actually intend harm? If Claude Code can spit out an app that lets you see into someone’s house, what keeps a DJI employee from doing so? And should a robot vacuum cleaner have a microphone? “It’s so weird to have a microphone on a freaking vacuum,” says Azdoufal.

It doesn’t help that when Azdoufal and The Verge contacted DJI about the issue, the company claimed it had fixed the vulnerability when it was actually only partially resolved.


“DJI can confirm the issue was resolved last week and remediation was already underway prior to public disclosure,” reads part of the original statement provided by DJI spokesperson Daisy Kong. We received that statement on Tuesday at 12:28PM ET — about half an hour before Azdoufal showed me thousands of robots, including our review unit, reporting for duty.

Not just robovacs — DJI’s power stations also use this system.
Screenshot by Sean Hollister / The Verge

To be clear, it’s not surprising that a robot vacuum cleaner with a smartphone app would phone home to the cloud. For better or for worse, users currently expect those apps to work outside of their own homes. Unless you’ve built a tunnel into your own home network, that means relaying the data through cloud servers first.

But people who put a camera into their home expect that data to be protected, both in transit and once it reaches the server. Security professionals should know that — but as soon as Azdoufal connected to DJI’s MQTT servers, everything was visible in cleartext. If DJI has merely cut off one particular way into those servers, that may not be enough to protect them if hackers find another way in.

Unfortunately, DJI is far from the only smart home company that’s let people down on security. Hackers took over Ecovacs robot vacuums to chase pets and yell racist slurs in 2024. In 2025, South Korean government agencies reported that Dreame’s X50 Ultra had a flaw that could let hackers view its camera feed in real time, and that another Ecovacs and a Narwal robovac could let hackers view and steal photos from the devices. (Korea’s own Samsung and LG vacuums received high marks, and a Roborock did fine.)

It’s not just vacuums, of course. I still won’t buy a Wyze camera, despite its new security ideas, because that company tried to sweep a remote access vulnerability under the rug instead of warning its customers. I would find it hard to trust Anker’s Eufy after it lied to us about its security, too. But Anker came clean, and sunlight is a good disinfectant.


DJI is not being exceptionally transparent about what happened here, but it did answer almost all our questions. In a new statement to The Verge via spokesperson Daisy Kong, the company now admits to “a backend permission validation issue” that could theoretically have let hackers see live video from its vacuums, and it admits that it didn’t fully patch that issue until after we confirmed problems were still present.

Here’s that whole statement:

DJI identified a vulnerability affecting DJI Home through internal review in late January and initiated remediation immediately. The issue was addressed through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10. The fix was deployed automatically, and no user action is required.

The vulnerability involved a backend permission validation issue affecting MQTT-based communication between the device and the server. While this issue created a theoretical potential for unauthorized access to live video of ROMO device, our investigation confirms that actual occurrences were extremely rare. Nearly all identified activity was linked to independent security researchers testing their own devices for reporting purposes, with only a handful of potential exceptions.

The first patch addressed this vulnerability but had not been applied universally across all service nodes. The second patch re-enabled and restarted the remaining service nodes. This has now been fully resolved, and there is no evidence of broader impact. This was not a transmission encryption issue. ROMO device-to-server communication was not transmitted in cleartext and has always been encrypted using TLS. Data associated with ROMO devices, such as those in Europe, is stored on U.S.-based AWS cloud infrastructure.

DJI maintains strong standards for data privacy and security and has established processes for identifying and addressing potential vulnerabilities. The company has invested in industry-standard encryption and operates a longstanding bug bounty program. We have reviewed the findings and recommendations shared by the independent security researchers who contacted us through that program as part of our standard post-remediation process. DJI will continue to implement additional security enhancements as part of its ongoing efforts.


Azdoufal says that even now, DJI hasn’t fixed all the vulnerabilities he’s found. One of them is the ability to view your own DJI Romo video stream without needing its security PIN. Another is so bad I won’t describe it until DJI has more time to fix it. DJI did not immediately promise to do so.

And both Azdoufal and security researcher Kevin Finisterre tell me it’s not enough for the Romo to send encrypted data to a US server, if anyone inside that server can easily read it afterward. “A server being based in the US in no way, shape, or form prevents .cn DJI employees from access,” Finisterre tells me. That seems evident, as Azdoufal lives in Barcelona and was able to see devices in entirely different regions.

“Once you’re an authenticated client on the MQTT broker, if there are no proper topic-level access controls (ACLs), you can subscribe to wildcard topics (e.g., #) and see all messages from all devices in plaintext at the application layer,” says Azdoufal. “TLS does nothing to prevent this — it only protects the pipe, not what’s inside the pipe from other authorized participants.”
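Azdoufal’s point about wildcard subscriptions can be illustrated with the MQTT topic-matching rules themselves. The sketch below implements the spec’s matching logic in plain Python; the topic names are hypothetical, not DJI’s real topics. Any broker without topic-level ACLs will deliver every matching message to an authenticated subscriber of `#`.

```python
# Plain-Python sketch of MQTT topic-filter matching (per the MQTT spec,
# "+" matches exactly one topic level and "#" matches all remaining levels).
# Topic names below are hypothetical examples, not DJI's real topics.

def topic_matches(sub: str, topic: str) -> bool:
    """Return True if subscription filter `sub` matches `topic`."""
    s_parts, t_parts = sub.split("/"), topic.split("/")
    for i, s in enumerate(s_parts):
        if s == "#":                      # multi-level wildcard: match everything
            return True
        if i >= len(t_parts):             # topic ran out of levels
            return False
        if s != "+" and s != t_parts[i]:  # literal level must match exactly
            return False
    return len(s_parts) == len(t_parts)

# A "#" subscription matches telemetry from every device on the broker,
# which is why missing topic-level ACLs are so dangerous:
print(topic_matches("#", "devices/SERIAL123/status"))                 # True
print(topic_matches("devices/+/status", "devices/SERIAL123/status"))  # True
print(topic_matches("devices/A/status", "devices/B/status"))          # False
```

The fix is server-side: per-client ACLs that scope each token to its own device’s topics, so that a `#` subscription returns nothing beyond what the client is entitled to see.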

When I tell Azdoufal that some may judge him for not giving DJI much time to resolve the issues before going public, he notes that he didn’t hack anything, didn’t expose sensitive data, and isn’t a security professional. He says he was simply livetweeting everything that happened while trying to control his robot with a PS5 gamepad.

“Yes, I don’t follow the rules, but people stick to the bug bounty program for money. I fucking don’t care, I just want this fixed,” he says. “Following the rules to the end would probably make this breach happen for a way longer time, I think.”


He doesn’t believe that DJI truly discovered these issues by itself back in January, and he’s annoyed the company only ever responded to him robotically in DMs on X, instead of answering his emails.

But he is happy about one thing: He can indeed control his Romo with a PlayStation or Xbox gamepad.



Two of my favorite color e-book readers are the cheapest they’ve been in months


Color isn’t essential in an e-reader, but let’s be honest, it’s a nice perk that can bring digital books, magazines, comics, cookbooks, and other publications to life. The catch is that color e-readers tend to be substantially pricier, which makes today’s deals stand out. Right now, the Kindle Colorsoft (16GB) and Kobo Libra Colour are matching their lowest prices to date, with the Amazon e-reader going for $169.99 ($80 off) at Amazon and Best Buy, and the Libra Colour going for $199.99 ($30 off) via Rakuten’s online storefront.

At their core, both are excellent e-readers with 7-inch, 300ppi E Ink displays, which drop to 150ppi when viewing color. The Colorsoft’s display is slightly more vibrant in most instances, but the difference isn’t dramatic. Each also offers IPX8 water resistance, so you don’t need to worry about spills and can comfortably read in the bath or by the pool.

Which one makes more sense for you largely depends on where you buy your books, how much storage you need, and whether you like to take notes. The Colorsoft is great if you’re heavily embedded in Amazon’s ecosystem, as buying and accessing Kindle books is intuitive and doesn’t require any sideloading. As the more affordable option in Amazon’s lineup, the standard Colorsoft delivers a nearly identical reading experience to the Signature Edition, and it supports Amazon’s “Send to Alexa Plus” feature, which lets you send notes or documents to Amazon’s AI-powered assistant for summaries, to-do lists, reminders, and more. The downside is that it lacks wireless charging and an auto-adjusting front light — which are standard on the step-up model — and comes with 16GB of storage instead of 32GB.

That said, if I didn’t already own so many Kindle books, the Libra Colour would be my pick. It offers double the storage at 32GB and includes intuitive physical page-turn buttons. You can also write notes while reading, given that it offers stylus support, and it includes built-in notebook templates, as well as the ability to convert handwriting to typed text. It also supports EPUB and a wider range of file formats, and lets you save articles for offline reading with Instapaper. And it also offers adjustable warm lighting, which makes reading at night a little easier on the eyes.



Robot plays tennis with humans in real time



A humanoid robot is now rallying tennis shots with a human in real time. It runs without a script or remote control, so it can react instantly on a tennis court.

The robot stands about 4 feet tall, giving it a compact, human-like frame. Galbot Robotics released a video showing its robot going shot-for-shot with a human player. The system behind it is called LATENT and runs on the Unitree G1.

And it is not just returning the ball. It is moving, adjusting and competing during live play.




A humanoid robot rallies tennis shots with a human in real time, reacting without scripts or remote control during live play. (Galbot Robotics)

Why this tennis robot is different from others

Most athletic robots you have seen follow scripts. They perform pre-programmed actions or rely on a remote control. This one operates differently. It reacts to a human opponent in real time, tracking fast-moving balls, shifting across the court and returning shots with surprising accuracy. It also adjusts to changing trajectories and unpredictable shots during rallies. Researchers say it can sustain long rallies with millisecond-level reactions and full-body coordination. That marks a major step forward.

How the AI learned to play tennis

Training a robot to play tennis is extremely complex. Tennis involves:

  • Ball speeds of up to 67 miles per hour
  • Split-second racket contact
  • Constant movement across a large court

Capturing complete human gameplay data is difficult. So the researchers used a different method.

Training the robot using motion fragments

Instead of recording full matches, they focused on small segments of movement:

  • Forehands
  • Backhands
  • Side steps

They gathered about five hours of motion data from five players. The sessions took place on a compact 10-by-16-foot court, roughly one-seventeenth the area of a standard doubles tennis court.


Humanoid robots designed by Galbot Robotics select items from a shelf at the Shanghai New Expo Center in Shanghai, China, on July 26, 2025. Galbot Robotics also designed the tennis-playing robot that learns movement fragments and applies them in live competition. (Ying Tang/NurPhoto via Getty Images)

How the robot plays tennis during live rallies

The system first learns individual movements. Then it combines them into coordinated sequences. That allows the robot to:

  • Move toward the ball
  • Strike it with control
  • Recover and reposition

To improve performance, the team trained the model in simulation. They varied physical conditions such as mass, friction and aerodynamics. This helps the robot adapt to real-world unpredictability. As a result, the system responds dynamically instead of following a fixed routine. 
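The randomization step described above can be sketched in a few lines. The parameter names and ranges here are illustrative assumptions, not Galbot’s actual training configuration; the point is only that each simulated episode draws fresh physics.

```python
import random

# Illustrative domain-randomization sketch: parameter names and ranges are
# assumptions for demonstration, not Galbot's actual training values.
def sample_physics(rng: random.Random) -> dict:
    """Draw one simulated episode's physical parameters."""
    return {
        "ball_mass_kg": rng.uniform(0.056, 0.060),  # regulation ball is ~57-58 g
        "court_friction": rng.uniform(0.4, 1.0),
        "drag_coeff": rng.uniform(0.45, 0.65),      # crude aerodynamics knob
    }

rng = random.Random(42)  # seeded so runs are reproducible
episodes = [sample_physics(rng) for _ in range(1000)]

# A policy trained across all these variations can't overfit any single
# simulated world, which helps it transfer to the messier real court.
print(len(episodes))
```

In practice the same idea extends to sensor noise, actuator delays, and latency, with each knob widening the gap the policy must learn to bridge.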

How well does it actually perform against humans?

In testing, the system achieved up to 96% success on forehand shots in simulation. In real-world trials, the robot can sustain rallies with a human and consistently return the ball across the net.


In the demo, the robot appears competitive. At times, it places shots away from the human player. That suggests more than a simple reaction. It points toward early forms of decision-making.

There are still limits. The robot can look unstable at times. Its motion is not yet as fluid as a trained athlete. High or unpredictable shots may still present challenges. Even so, the progress is clear.

Why this matters beyond tennis

This breakthrough goes far beyond tennis. It shows how robots can learn complex human skills without perfect data. The same approach could apply to:

  • Football
  • Badminton
  • Industrial work
  • Search and rescue

Any task that lacks complete motion data could benefit from this method. That is the bigger picture.



A robot dances at the launch ceremony of a Galbot Robotics retail store in Beijing, China, on August 7, 2025. The company has also designed a 4-foot robot that returns tennis shots with millisecond reactions and full-body coordination. (VCG/VCG via Getty Images)

Could robots compete with humans one day?

The path forward is becoming clearer. Today, the robot rallies. Next, it competes. In time, robots could train with or challenge professional athletes. Exhibition matches between humans and machines may become part of the sport. That future no longer feels far away.


Kurt’s key takeaways

This demo shows how quickly things are changing. Robots are no longer stuck following scripts. They can now react, adjust and compete in real situations. What used to feel far off is starting to show up right in front of us.


So here is the question: If a robot could outplay you on the court, would you still want to compete, or would you rather train with it? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com.  All rights reserved.



AI influencer awards season is upon us


First came the AI beauty pageant. Then the AI music contests. Now, there is an award for AI Personality of the Year — perhaps the inevitable next step for the AI influencer economy as it transforms from quirky novelty into a serious and lucrative industry.

The contest, a joint venture between generative AI studio OpenArt and AI-powered creator platform Fanvue, with backing from AI voice company ElevenLabs, opens on Monday and runs for a month. The organizers said it is intended to “celebrate the creative talent ‘behind’ AI Influencers” and recognize their growing commercial and cultural clout.

Contestants will compete for a total prize fund of $20,000, which will be split between an overall winner and individual categories of fitness, lifestyle, comedian, music and dance entertainer, and fictional cartoon, anime, or fantasy personality. Victors will be celebrated at an event in May that the organizers are dubbing the “‘Oscars’ for AI personalities.”

To enter, you must develop your AI influencer on OpenArt’s platform and submit it at www.AIpersonality.ai. You’ll be asked for social media handles across TikTok, X, YouTube, and Instagram, as well as the story behind the character, your motivations for creating it, and details of any brand work.

Among those assessing contestants are 13-time Emmy-winning comedy writer Gil Rief, the creators of Spanish AI model Aitana Lopez, and Christopher “Topher” Townsend, the MAGA rapper behind AI-generated gospel singer Solomon Ray. According to a copy of the judges’ briefing seen by The Verge, contestants will be scored on four criteria: quality, social clout, brand appeal, and the inspiration behind the avatar. Specific points include reliably engaging with followers, portraying a consistent look across social channels, accurate details like having the “right number of fingers and thumbs,” and having “an authentic narrative” behind the avatar.


The contest is open to established creators and novices alike, though existing AI influencers will still need to submit material produced on OpenArt’s platform, Matt Jones, head of brand at Fanvue, told The Verge.

Though the contest is designed to celebrate the creators of virtual influencers, Jones said that entrants don’t need to publicly identify themselves. “If a person who created this amazing piece of work wants nothing to do with the press or to expose themselves or to have their name out there, that’s obviously fine,” he said. “There would be no need to thrust anybody into the limelight here. We would just celebrate the piece of work.”

That creators can remain anonymous feels odd for a contest judging authenticity, particularly in an AI influencer ecosystem built on fictional people, fake personas, and fabricated backstories. That same anonymity has also helped grifts flourish with little accountability, from the AI white nationalist rapper Danny Bones to MAGA fantasy girl Jessica Foster.

There’s familiar baggage too, including persistent questions about originality, whether AI-generated work, or even a likeness, has been lifted from real creators, and whether these tools simply reproduce the same old biases in synthetic form. Organizer Fanvue has already faced criticism for this in the past: in 2024, a Guardian columnist described its “Miss AI” beauty pageant as something that “take(s) every toxic gendered beauty norm and bundle(s) them up into a completely unrealistic package.”

To Fanvue’s Jones, creators inevitably leave something of themselves in the AI characters they make. “You can’t help but put a little bit of yourself into the stories that you tell and the characters that you make,” he said, urging creators to “lean into that.” The idea feels at home in the influencer economy: not strictly real, but a form of synthetic authenticity the internet already knows how to handle.


