Google Fast Pair flaw lets hackers hijack headphones



Google designed Fast Pair to make Bluetooth connections fast and effortless. One tap replaces menus, codes and manual pairing. That convenience now comes with serious risk. Security researchers at KU Leuven uncovered flaws in Google’s Fast Pair protocol that allow silent device takeovers. They named the attack method WhisperPair. An attacker nearby can connect to headphones, earbuds or speakers without the owner knowing. In some cases, the attacker can also track the user’s location. Even more concerning, victims do not need to use Android or own any Google products. iPhone users are also affected.

Sign up for my FREE CyberGuy Report

Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.



Fast Pair makes connecting Bluetooth headphones quick, but researchers found that some devices accept new pairings without proper authorization. (Kurt “CyberGuy” Knutsson)

What WhisperPair is and how it hijacks Bluetooth devices

Fast Pair works by broadcasting a device’s identity to nearby phones and computers. That shortcut speeds up pairing. Researchers found that many devices ignore a key rule. They still accept new pairings while already connected. That opens the door to abuse.

Within Bluetooth range, an attacker can silently pair with a device in about 10 to 15 seconds. Once connected, they can interrupt calls, inject audio or activate microphones. The attack does not require specialized hardware and can be carried out using a standard phone, laptop, or low-cost device like a Raspberry Pi. According to the researchers, the attacker effectively becomes the device owner.
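The failure mode is easy to model. The sketch below is a hypothetical simulation of the behavior the researchers describe, not real Fast Pair firmware; all class and method names are invented for illustration. The vulnerable device accepts any pairing request, even while an owner is already connected, while the patched version gates pairing on an explicit pairing mode:

```python
# Hypothetical simulation of the WhisperPair flaw, not real firmware.
# All names are invented for illustration.

class VulnerableAccessory:
    """Accepts any pairing request, even while already connected."""

    def __init__(self):
        self.paired_peers = []

    def handle_pairing_request(self, requester: str) -> str:
        # Flaw: no check that the user put the device into pairing mode,
        # and no check that another phone is already connected.
        self.paired_peers.append(requester)
        return "accepted"


class PatchedAccessory(VulnerableAccessory):
    """Only accepts pairing while the user has enabled pairing mode."""

    def __init__(self):
        super().__init__()
        self.pairing_mode = False  # set only by a deliberate button press

    def handle_pairing_request(self, requester: str) -> str:
        if not self.pairing_mode:
            return "rejected"
        return super().handle_pairing_request(requester)
```

In this toy model, the attacker’s request to the vulnerable device succeeds silently, which mirrors the researchers’ finding that a nearby device can pair in seconds without any action from the owner.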

Audio brands affected by the Fast Pair vulnerability

The researchers tested 17 Fast Pair-compatible devices from major brands, including Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech and Google. Most of these products passed Google certification testing. That detail raises uncomfortable questions about how security checks are performed.

How headphones can become tracking devices

Some affected models create an even bigger privacy issue. Certain Google and Sony devices integrate with Find Hub, which uses nearby devices to estimate location. If a headset has never been linked to a Google account, an attacker can claim it first. That allows continuous tracking of the user’s movements. If the victim later receives a tracking alert, it may appear to reference their own device. That makes the warning easy to dismiss as an error.


Attacker’s dashboard with location from the Find Hub network. (KU Leuven)

Why many Fast Pair devices may stay vulnerable

There is another problem most users never consider. Headphones and speakers require firmware updates. Those updates usually arrive through brand-specific apps that many people never install. If you never download the app, you never see the update. That means vulnerable devices could remain exposed for months or even years.

The only way to fix this vulnerability is by installing a software update issued by the device manufacturer. While many companies have released patches, updates may not yet be available for every affected model. Users should check directly with the manufacturer to confirm whether a security update exists for their specific device.

Why convenience keeps creating security gaps

Bluetooth itself was not the problem. The flaw lives in the convenience layer built on top of it. Fast Pair prioritized speed over strict ownership enforcement. Researchers argue that pairing should require cryptographic proof of ownership. Without it, convenience features become attack surfaces. Security and ease of use do not have to conflict. But they must be designed together.


Google responds to the Fast Pair WhisperPair security flaws

Google says it has been working with researchers to address the WhisperPair vulnerabilities and began sending recommended patches to headphone manufacturers in early September. Google also confirmed that its own Pixel headphones are now patched.

In a statement to CyberGuy, a Google spokesperson said, “We appreciate collaborating with security researchers through our Vulnerability Rewards Program, which helps keep our users safe. We worked with these researchers to fix these vulnerabilities, and we have not seen evidence of any exploitation outside of this report’s lab setting. As a best security practice, we recommend users check their headphones for the latest firmware updates. We are constantly evaluating and enhancing Fast Pair and Find Hub security.”

Google says the core issue stemmed from some accessory makers not fully following the Fast Pair specification. That specification requires accessories to accept pairing requests only when a user has intentionally placed the device into pairing mode. According to Google, failures to enforce that rule contributed to the audio and microphone risks identified by the researchers.

To reduce the risk going forward, Google says it updated its Fast Pair Validator and certification requirements to explicitly test whether devices properly enforce pairing mode checks. Google also says it provided accessory partners with fixes intended to fully resolve all related issues once applied.

On the location tracking side, Google says it rolled out a server-side fix that prevents accessories from being silently enrolled into the Find Hub network if they have never been paired with an Android device. According to the company, this change addresses the Find Hub tracking risk in that specific scenario across all devices, including Google’s own accessories.
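That server-side change can be pictured as a simple enrollment guard. The sketch below is a hypothetical model of the logic Google describes, not its actual Find Hub code; the class and parameter names are invented for illustration. Before the fix, ownership effectively went to whichever account claimed an accessory first; the fix refuses silent enrollment of accessories that have never been paired with an Android device:

```python
# Hypothetical model of the Find Hub enrollment fix, not Google's code.
# Names are invented for illustration.

class FindHubRegistry:
    def __init__(self):
        self.owners = {}  # device_id -> account that claimed it

    def claim(self, device_id: str, account: str,
              ever_paired_with_android: bool) -> str:
        if device_id in self.owners:
            return "already-claimed"
        if not ever_paired_with_android:
            # Server-side fix: block first-come claims on accessories
            # that were never paired with an Android device.
            return "rejected"
        self.owners[device_id] = account
        return "claimed"
```

In this model, an attacker can no longer claim a never-registered headset for tracking, while a legitimate first pairing still succeeds.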


Researchers, however, have raised questions about how quickly patches reach users and how much visibility Google has into real-world abuse that does not involve Google hardware. They also argue that weaknesses in certification allowed flawed implementations to reach the market at scale, suggesting broader systemic issues.

For now, both Google and the researchers agree on one key point. Users must install manufacturer firmware updates to be protected, and availability may vary by device and brand.


Unwanted tracking notification showing the victim’s own device. (KU Leuven)

How to reduce your risk right now

You cannot disable Fast Pair entirely, but you can lower your exposure.


1) Check if your device is affected

If you use a Bluetooth accessory that supports Google Fast Pair, including wireless earbuds, headphones or speakers, you may be affected. The researchers created a public lookup tool that lets you search for your specific device model and see whether it is vulnerable. Checking your device is a simple first step before deciding what actions to take. Visit whisperpair.eu/vulnerable-devices to see if your device is on the list.

2) Update your audio devices

Install the official app from your headphone or speaker manufacturer. Check for firmware updates and apply them promptly.

3) Avoid pairing in public places

Pair new devices in private spaces. Avoid pairing in airports, cafés or gyms where strangers are nearby.

4) Factory reset if something feels off

Unexpected audio interruptions, strange sounds or dropped connections are warning signs. A factory reset can remove unauthorized pairings, but it does not fix the underlying vulnerability. A firmware update is still required.

5) Turn off Bluetooth when not needed

Bluetooth only needs to be on during active use. Turning off Bluetooth when not in use limits exposure, but it does not eliminate the underlying risk if the device remains unpatched.


6) Reset secondhand devices

Always factory reset used headphones or speakers before pairing them. This removes hidden links and account associations.

7) Take tracking alerts seriously

Investigate Find Hub or Apple tracking alerts, even if they appear to reference your own device.

8) Keep your phone updated

Install operating system updates promptly. Platform patches can block exploit paths even when accessories lag behind.

Kurt’s key takeaways

WhisperPair shows how small shortcuts can lead to large privacy failures. Headphones feel harmless. Yet, they contain microphones, radios and software that need care and updates. Ignoring them leaves a blind spot that attackers are happy to exploit. Staying secure now means paying attention to the devices you once took for granted.

Should companies be allowed to prioritize fast pairing over cryptographic proof of device ownership? Let us know by writing to us at Cyberguy.com



Copyright 2026 CyberGuy.com. All rights reserved.



Shokz’s bassy OpenRun Pro 2 are $40 off thanks to a new Mother’s Day promo


If you’re looking to pick up a pair of open-ear headphones for yourself — or your mom — Shokz is running a Mother’s Day sale. Now through May 10th, the company’s best pair of bone conduction headphones, the OpenRun Pro 2, are available from Amazon, Best Buy, and Shokz for $139.95 ($40 off), their lowest price of the year. If you purchase direct, you’ll also receive a free waist bag (a $29.99 value).

While traditional headphones tend to block out the world, open-style headphones provide a safer alternative, letting you listen to music and podcasts while remaining vigilant. After testing the OpenRun Pro 2, The Verge’s Victoria Song said using them felt “like the stars finally aligning.” Unlike many open-ear headphones, they don’t skimp on bass or clarity thanks to a dedicated air conduction speaker, though they still won’t rival a traditional pair of in-ears when it comes to sound quality. Still, they’re more comfortable than earlier Shokz models, with flexible ear hooks and a lightweight neckband that creates a secure, natural fit, even for those who wear glasses.

The fact that the Pro 2 vibrate significantly less than other models is another highlight, as is battery life. They offer up to 12 hours on a single charge, which was enough for us to go nearly a week without plugging them in (they charge incredibly fast via USB-C, too). They also include AI-powered noise cancellation for calls (though results were mixed in our testing) and an IP55 rating, making them well-suited for both sweaty workouts and outdoor use.




United Arab Emirates plans AI-run government within two years



The United Arab Emirates just made one of the most aggressive moves yet in the global AI race. The country says it will integrate agentic artificial intelligence across half of its government operations within two years.

For context: Most governments are still debating whether to use AI. This plan puts speed and execution front and center and goes in the opposite direction of how governments typically handle major technology changes.

If it works, the UAE could offer a preview of how AI may reshape public services far beyond the Middle East. If it runs into problems, it could also highlight the risks of moving this fast when government decisions, personal data and public trust are all involved.




UAE leaders meet to outline a plan that would bring Agentic AI into core government decision-making and operations. (Dubai Media Office)


What agentic AI means for the UAE government

Agentic AI refers to systems that can analyze information, make decisions and take action with minimal human input. In this model, AI can process requests, adjust workflows and improve outcomes in real time. It can also carry out certain government tasks from start to finish, instead of only suggesting what a person should do next.

So, how would that show up in everyday ways? Think faster permit approvals, automated public services or systems that respond instantly to changes in demand. Instead of waiting for human bottlenecks, processes move continuously.


According to the announcement, AI will act more like an operational partner than a tool. That marks a change in how governments think about technology.


How the UAE plans to roll out AI across government

There is also a clear structure behind the rollout. The UAE has put a detailed plan in place with clear expectations from the start. Every ministry and government entity will be evaluated based on how quickly it adopts AI, how well it implements those systems and how effectively it redesigns workflows around them.

Oversight will come from Mansour bin Zayed Al Nahyan, a senior government leader who plays a key role in the country’s executive decision-making. Day-to-day execution will be led by a task force chaired by Mohammad Al Gergawi, a longtime cabinet minister focused on government modernization.

How AI will change government jobs in the UAE

One of the biggest parts of this plan has less to do with machines and more to do with people. Every federal employee will receive AI training. The goal is to build a workforce that can work alongside intelligent systems rather than compete with them.

That matters because large-scale automation often raises concerns about job loss. The UAE is taking a different angle by focusing on reskilling and adaptation. If it works, it could become a model that other countries try to follow. If it struggles, it will highlight how difficult workforce transformation can be at scale.

Why the UAE is moving so fast on AI in government

This move fits into a broader strategy. The UAE has spent years positioning itself as a tech-forward economy. By embedding AI into government operations, the country hopes to improve efficiency, reduce delays and deliver faster services to residents and businesses.


It also sends a signal globally. The UAE wants to set the benchmark for how governments use AI in a big way. That puts pressure on other countries, including the United States, to rethink how quickly they adopt similar technologies.

The UAE plans to use agentic AI to help analyze information, make decisions and carry out tasks across a wide range of government services. (Kurt “CyberGuy” Knutsson)

Concerns about AI in government are already growing

For all the excitement, this kind of rollout raises real concerns. Critics point to accountability as one of the biggest questions. When AI systems start making decisions inside government, it can become harder to understand who is responsible when something goes wrong. Was it the system, the developer or the agency using it?


Privacy is another sticking point. Government systems already handle sensitive personal data. Expanding AI across those systems could increase how much data is collected, analyzed and stored, which makes some experts uneasy.


There is also the issue of bias. AI models learn from data, and if that data has gaps or flaws, the outcomes can reflect that. In a government setting, that could affect access to services, approvals or enforcement decisions in ways that are not always obvious.

Then there is trust. Even if the systems work as intended, people may still hesitate to accept decisions made by machines, especially when those decisions affect their daily lives.

Supporters argue that these risks can be managed with strong oversight and transparency. Still, critics say the speed of this rollout leaves little room for error, and that is where the debate is likely to intensify.

What this means to you

Even if you do not live in the UAE, this push has real implications. First, it raises expectations. When one government proves it can deliver faster services with AI, people elsewhere will start asking why theirs cannot.

Second, it accelerates the global AI race. Governments will need to balance speed with privacy, security and oversight. Third, it highlights a growing reality. AI is moving into decision-making roles beyond basic support functions. That changes how systems are built and how accountability works.


You may start to see similar experiments here in the United States, especially at the state or city level, where innovation can happen faster.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: CyberGuy.com

Kurt’s key takeaways

The UAE is betting big on a future in which AI plays a central role in how its government operates. The timeline is aggressive, and the scope is hard to ignore. What stands out most is how quickly this is moving from concept to execution. At the same time, the questions are just as big as the opportunity. Who is accountable when AI makes a decision? How much data is being used behind the scenes? And how much trust are people willing to place in systems they cannot fully see? This could become a model that other governments try to follow. It could also expose real challenges around transparency and control. Either way, it is a clear signal that AI is moving deeper into systems that affect our everyday lives.

The initiative is set to expand AI across multiple agencies, with a focus on faster services, improved efficiency and real-time operations. (Kurt “CyberGuy” Knutsson)

If AI can start making real-time decisions inside government systems, how comfortable are you with that level of automation showing up in your everyday life? Let us know by writing to us at Cyberguy.com






Reggie Fils-Aimé says Amazon once asked Nintendo to break the law


“Literally, we stopped selling to Amazon, and it’s because I wasn’t going to do something illegal. I wasn’t going to do something that would put at risk the relationship we have with other retailers. But it also set the stage to say, look, you’re not going to push me around. This is the way we do business. And so that’s how, over time, you build respect.”
