
Technology

Ghosts in the Kinect


Billy Tolley swings a Microsoft Kinect around an abandoned room in sudden, jittery movements. “Whoa!” he says. “Dude, it was so creepy.” On the display, we see an anomaly of arrows, spheres, and red lines that disappears almost as soon as it arrives. For Tolley and Zak Bagans, two members of the Ghost Adventures YouTube channel, this is enough to suggest they should leave the building. Because for this team and other similar enthusiasts, that seemingly innocuous blotter of white arrows means something more terrifying: a glimpse at specters and phantoms invisible to the human eye.

Fifteen years after its release, just about the only people still buying the Microsoft Kinect are ghost hunters like Tolley and Bagans. Though the body-tracking camera, which was discontinued in 2017, started as a gaming peripheral, it also enjoyed a spirited afterlife outside of video games. But in 2025, its most notable application is helping paranormal investigators, like the Ghost Adventures team, in their attempts at documenting the afterlife.

The Kinect’s ability to convert the data from its body-tracking sensors into an on-screen skeletal dummy delights these investigators, who allege the figures it shows in empty space are, in fact, skeletons of the spooky, scary variety. Looking at it in use — the Kinect is particularly popular with ghost-hunting YouTubers — it’s certainly producing results, showing human-like figures where there are none. The question is: why?

With the help of ghost hunters and those familiar with how the Kinect actually works, The Verge set out to understand why perhaps the most misbegotten gaming peripheral has gained such a strong foothold in the search for the paranormal.

Part of the reason is purely technical. “The Kinect’s popularity as a depth camera for ghost hunting stems from its ability to detect depth and create stick-figure representations of humanoid shapes, making it easier to identify potential human-like forms, even if faint or translucent,” says Sam Ashford, founder of ghost-hunting equipment store SpiritShack.


This is made possible by the first-generation Kinect’s structured light system. By projecting a grid of infrared dots into an environment — even a dark one — and reading the resulting pattern, the Kinect can detect deformations in the projection and, through a machine-learning algorithm, discern human limbs within those deformations. The Kinect then converts that data into a visual representation of a stick figure, which, in its previous life, was pumped back into games like Dance Central and Kinect Sports.
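The depth-recovery principle behind structured light can be sketched in a few lines: each projected dot shifts sideways in the camera image by an amount that falls off with the distance of the surface it lands on. The focal length and baseline below are illustrative stand-ins, not Microsoft's actual calibration constants.

```python
# Sketch of the structured-light triangulation idea the first-gen Kinect
# relies on (illustrative numbers, not Microsoft's real calibration).
# The IR projector and IR camera sit a fixed baseline apart; a dot's
# horizontal shift ("disparity") encodes how far away the surface is.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 580.0,
                         baseline_m: float = 0.075) -> float:
    """Triangulate depth in meters from a dot's pixel shift."""
    if disparity_px <= 0:
        return float("inf")  # no displacement: surface at the reference plane
    return (focal_px * baseline_m) / disparity_px

# A dot shifted 30 px reads as a surface 1.45 m away:
print(round(depth_from_disparity(30.0), 2))
```

Running this over every dot in the projected grid yields a depth map; the skeletal fitting is a separate machine-learning stage layered on top of that map.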

The Kinect isn’t always seeing what it thinks it is

When it was released in 2010, the first-gen Kinect was cutting-edge technology: a high-powered, robust, and lightweight depth camera that condensed what would usually retail for upward of $6,000 into a $150 peripheral. Today, you can find a Kinect on eBay for around $20. Ghost hunters, however, typically mount it to a carry handle and a tablet and upsell it for around $400 to $600, rebranded as a “structured light sensor” (SLS) camera. “The user will direct the camera to a certain point of the room where they believe activity to be present,” says Andy Bailey, founder of a gear shop for ghost hunters called Infraready. “The subject area will be absent of human beings. However, the camera will often calculate and display the presence of a skeletal image.”

Though this is often touted as proof we’re all bound for an eternity haunting aging hotels and abandoned prisons, Bailey urges caution, telling would-be ghost hunters that the cameras are best paired with other equipment to “provide an additional layer of supporting evidence.” For this, Ghost Hunters Equipment, the retail arm of haunted tour operator Ghost Augustine, recommends that “EMF readings, temperature, baseline readings, and all of that are essential when considering authentication of paranormal activity.”

That’s because the Kinect isn’t always seeing what it thinks it is. But what is it actually seeing? Did Microsoft, while trying to break into a motion-control market monopolized by the Nintendo Wii, accidentally create a conduit through which we might glimpse the afterlife? Sadly, no.



The Kinect is actually a straightforward piece of hardware. It is trained to recognize the human body, and assumes that it’s always looking at one — because that’s what it’s designed to do. Whatever you show it, whether human or humanoid or something entirely different, it will try to discern human anatomy. If the Kinect is not 100 percent sure of a figure’s position, it might even look like the figure it displays is moving. “We may recognise the face of Jesus in a piece of toast or an elephant in a rock formation,” says Jon Wood, a science performer who has a show devoted to examining ghost hunting equipment. “Our brains are trying to make sense of the randomness.” The Kinect does much the same, except it cannot overrule its hunches.
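Wood's point about forced pattern-matching can be shown with a toy classifier. This is a deliberately simplified analogy, not the Kinect's real pipeline: a system that must pick the best-scoring label, with no option to abstain, will report "person" even when every candidate fits badly.

```python
# Toy illustration of why a tracker with no "reject" option always finds
# a body: it scores candidate shapes and reports the best one, however
# weak that best score is. (Not the Kinect's actual algorithm.)

def best_fit_label(candidates, min_confidence=None):
    """candidates: list of (label, confidence in 0..1). Returns a label,
    or None if a confidence floor is set and nothing clears it."""
    label, conf = max(candidates, key=lambda c: c[1])
    if min_confidence is not None and conf < min_confidence:
        return None  # a stricter system would abstain on weak evidence
    return label

noise = [("person", 0.12), ("chair", 0.09), ("curtain", 0.11)]
print(best_fit_label(noise))        # forced choice: 'person'
print(best_fit_label(noise, 0.5))   # with a floor: None
```

The second call is what a skeptic would want; the first is, in effect, what a camera aimed at an empty room delivers.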

That suits ghost hunters just fine, of course: the Kinect’s habit of finding human shapes where there are none is a crowd-pleaser. The Kinect, deployed in dark rooms bathed in infrared light from cameras and torches, wobbling in the hands of excitable ghost hunters as it tries to read a precise grid of infrared points, is almost guaranteed to show them what they want to see.

Much of ghost hunting depends on ambiguity. If you’re searching for proof of something, be it the afterlife or not, logic suggests you’d want tools that can provide the clearest results, the better to cement the veracity of that proof. Ghost hunters, however, prefer technology that will produce results of any kind: murky recordings on 2000s voice recorders that might be mistaken for voices, low-resolution videos haunted by shadowy artifacts, and any cheap equipment that can call into question the existence of dust (sorry, spirit orbs) — bonus points if battery life is temperamental.

“I’ve watched ghost hunters use two different devices for measuring electromagnetic fields (EMF),” Wood says. “One would be an accurate and expensive TriField TF2, that never moves unless it actually encounters an electrical field. The other would be a £15 [$18], no-brand, ‘KII’ device with five lights that go berserk when someone so much as sneezes. Which one was more popular, do you think?”


Glitches aren’t tolerated — they’re encouraged

Given the notoriously unreliable skeletal tracking of the Kinect — most non-gaming applications bypass the Kinect’s default SDKs, preferring to process its raw data by other, less error-prone, means — it would be stranger if it didn’t see figures every time it’s deployed. But that’s the point. Like so much technology ghost hunters use, the Kinect’s flaws aren’t bugs or glitches. They’re not tolerated — they’re encouraged.
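As an illustration of why projects prefer the raw data, a single glitchy depth reading is easy to suppress with a temporal median filter, a common cleanup step in hobbyist depth-camera work. The sketch below is simplified to a single pixel and is not any particular SDK's code.

```python
# Illustrative only: one reason non-gaming projects read the Kinect's raw
# depth stream is that spurious single-frame readings are easy to suppress
# yourself, e.g. with a per-pixel median over the last few frames.

from statistics import median

def stable_depth(frames, x, y, window=5):
    """Median of pixel (x, y)'s depth over the most recent `window` frames."""
    recent = [frame[y][x] for frame in frames[-window:]]
    return median(recent)

# Five 1x1 "frames" of depth in millimeters; the third is a glitch.
frames = [[[800]], [[801]], [[3000]], [[799]], [[802]]]
print(stable_depth(frames, 0, 0))  # the 3000 mm outlier is rejected: 801
```

A pipeline built this way throws the glitches out. The SLS workflow, by contrast, is built on keeping them in.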

“If a person pays good money to enjoy a ghost hunt, what are they after?” Wood asks. “They prime themselves for a ‘spooky encounter’ and open up to the suggestion of anything being ‘evidence of a ghost’ — they want to find a ghost, so they make sure they do.”

If it were just skeletal tracking that ghost hunters were after, better options now exist that need nothing more than a simple color image. But improved methodology wouldn’t return the false positives that maintain belief, and so skeletal tracking from 2010 is preferred. None of this is likely to move believers toward skepticism. But we do know why the Kinect (or SLS) returns the results it does, and we know it’s not ghosts.

That said, even if its results are erroneous, maybe the Kinect’s new lease on afterlife isn’t a bad thing. Much as ghosts supposedly patrol the same paths over and over until interrupted by ghost hunters, perhaps it’s fitting that the Kinect will continue forevermore to track human bodies — even if the bodies aren’t really there.


Technology

Here’s your first look at Kratos in Amazon’s God of War show


Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image but a notable one showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes: Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.


Technology

300,000 Chrome users hit by fake AI extensions



Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)


What you need to know about fake AI extensions

Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

  • AI Assistant
  • Llama
  • Gemini AI Sidebar
  • AI Sidebar
  • ChatGPT Sidebar
  • Grok
  • Asking ChatGPT
  • ChatGBT
  • Chat Bot GPT
  • Grok Chatbot
  • Chat With Gemini
  • XAI
  • Google Gemini
  • Ask Gemini
  • AI Letter Generator
  • AI Message Generator
  • AI Translator
  • AI For Translation
  • AI Cover Letter Generator
  • AI Image Generator ChatGPT
  • Ai Wallpaper Generator
  • Ai Picture Generator
  • DeepSeek Download
  • AI Email Writer
  • Email Generator AI
  • DeepSeek Chat
  • ChatGPT Picture Generator
  • ChatGPT Translate
  • AI GPT
  • ChatGPT Translation
  • ChatGPT for Gmail


These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)


How the fake AI Chrome extension attack works

These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.

In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.

Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.


If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”


Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

7 ways you can protect yourself from malicious Chrome extensions

If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.


1) Remove any suspicious or unused browser extensions

On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
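If you'd rather audit manifests directly than eyeball the chrome://extensions page, a short script can flag extensions that request broad host access such as "<all_urls>", the permission pattern that lets an extension read every page you visit. This is a hedged helper, not an official Google tool; the directory layout matches Chrome's usual profile structure, but the exact path varies by operating system, channel and profile, so treat it as an assumption.

```python
import json
import pathlib
import sys

# Walk a Chrome profile's Extensions folder and flag any manifest that
# requests broad host access. Layout assumed (may vary by OS/profile):
#   <Extensions dir>/<extension-id>/<version>/manifest.json

BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def broad_permissions(manifest: dict) -> set:
    """Return the broad host patterns a manifest requests, if any."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return requested & BROAD

def audit(extensions_dir: pathlib.Path) -> None:
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        flagged = broad_permissions(manifest)
        if flagged:
            print(f"{manifest.get('name', '?')}: requests {sorted(flagged)}")

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. on macOS the default profile's extensions live at:
    #   ~/Library/Application Support/Google/Chrome/Default/Extensions
    audit(pathlib.Path(sys.argv[1]))
```

Broad host access is legitimate for some tools, so a flag here is a reason to look closer, not proof of malice; the malicious AI extensions in this campaign needed exactly this kind of access to read your pages.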

2) Change your passwords

If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

3) Use a password manager to create and protect strong passwords

A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
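The generation step a password manager performs is, at its core, just high-entropy randomness from a cryptographically secure source. A minimal sketch using Python's standard secrets module (the storage, syncing and breach-alert features are the hard part and are not shown):

```python
import secrets
import string

# Draw each character with a cryptographically secure RNG, the same
# basic approach a password manager's generator takes.

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_password()))  # a fresh 20-character password each call
```

A 20-character password over a 70-symbol alphabet gives roughly 122 bits of entropy, far beyond anything an attacker can brute-force, which is why unique generated passwords beat memorized ones.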

4) Install strong antivirus software and keep it active

Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

5) Use an identity theft protection service

Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.


6) Keep your browser and computer fully updated

Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

7) Use a personal data removal service

Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


Kurt’s key takeaway

Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.


Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.


Technology

Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance


Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.

It’s the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations over Defense Secretary Pete Hegseth’s push to renegotiate all AI labs’ current contracts with the military. Anthropic, so far, has refused to back down from its two red lines: no mass surveillance of Americans, and no lethal autonomous weapons (weapons with license to kill targets with no human oversight whatsoever). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the secretary reportedly issued an ultimatum: back down by the end of business on Friday, or else.

In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)

The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step to designating the company a “supply chain risk” – a public threat that the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to make Anthropic comply.


Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”

