

I rode in one of the UK’s first self-driving cars


I never really believed self-driving cars would make it to the UK, so you can imagine my surprise when I found myself clambering into one of Wayve’s autonomous vehicles for a journey around north London a few weeks ago.

In June, the company announced plans with Uber to begin trialing Level 4 fully autonomous robotaxis in the capital as soon as 2026, part of a government plan to fast-track self-driving pilots ahead of a potential wider rollout in late 2027. Alphabet-owned Waymo, now a staple fixture of US cities like San Francisco, Los Angeles, and Phoenix, also has its eyes on London, announcing plans for its own fully driverless robotaxi service in 2026, one of its first efforts to expand beyond the US.

My skepticism on whether self-driving cars will work in London isn’t unfounded. On many levels, London is a robotaxi’s worst nightmare. At every possible turn, the city is at odds with autonomy. Its road network is narrow, winding, and hellish to navigate, a morass of concrete that emerged over centuries, designed to be used by horses and carts, not cars. Tight streets make avoiding obstacles — potholes, parked cars, you know the drill — even tougher, and this is before we’ve even started to consider the flood of other vehicles, jaywalkers, tourists, cyclists, buses, taxi cabs, and animals (like rogue military horses) sharing the road. And the less said about roundabouts or the weather, the better.

Even if a robotaxi manages to successfully navigate London, it needs Londoners on board with the technology too. This might be tough. We’re a skeptical bunch, and when it comes to putting AI in cars, surveys rank Brits among the least receptive in the world. There’s also been a lot of hype — and failure — surrounding the technology in the past, leaving a legacy of distrust and disbelief new entrants must dispel. And there are the iconic black cabs to contend with; they’ve been known to drive a hard bargain. When Uber first came on the scene, cabbies repeatedly brought London to a standstill, and the group is still at war with the ridesharing company today. That said, they don’t seem too threatened this time around, dismissing driverless cars as “a fairground ride” and “a tourist attraction in San Francisco.”

Wayve’s headquarters didn’t feel like a San Francisco tourist attraction. The combination of undecorated brick and black metal fencing gives Wayve, which started life in a Cambridge garage in 2017 and is still led by cofounder Alex Kendall, the vibe of a random warehouse. Just 15 minutes away is King’s Cross, a reformed industrial wasteland now home to companies like Google and Meta, which many would consider a more conventional setting for a company that has raised more than $1 billion from titans like Nvidia, Microsoft, and SoftBank (and is reportedly in talks to raise up to $2 billion more).


Its cars — a fleet of Ford Mustang Mach-Es — didn’t look that futuristic either. The only real giveaway that they planned to replace human drivers was a small box of sensors mounted above the windshield, a far cry from the obtrusive humps on top of Waymos.

Inside, it was just as ordinary. As we rolled out of Wayve’s compound, the only thing that really stood out was the big red emergency stop button in the center console, a reminder that, legally speaking, a human driver needs to be ready to seize control at any moment. If it hadn’t been for the shrill buzz going off to indicate the robotaxi had taken over, I don’t think I’d have noticed the driver had given up any control at all.

It handled the city well — far better than I expected. Within minutes, we’d left the quiet side streets near Wayve’s base and joined a busier road. The car eased between parked cars and delivery vehicles, slowed politely when food couriers cut in front of us on electric bikes, and, mercifully, didn’t mow down any of the jaywalkers who treated London’s crossings more like suggestions than rules.

The ride wasn’t exactly smooth, though, and nothing like the ethereal calm I felt when I took my first Waymo in San Francisco this summer. Wayve was more hesitant than I’m used to, a little like when my sister took me out for the first time after earning her license a few years ago.

That hesitancy is especially odd in London. Friends, cabbies, bus drivers, and Uber drivers I’ve ridden with all seem to exude a kind of impatient confidence, a sense of urgency that Wayve utterly lacked. I’ve not driven since I passed my test 15 years ago — the Tube makes it pretty easy to do without in London — but its pauses still managed to test my patience. Our route took us past the high walls of Pentonville Prison in Islington, and we trundled behind a cyclist I was sure even I could safely overtake and any Londoner certainly would have.


I later learned this tentativeness is a feature, not a bug. Unlike Waymo — which uses a combination of detailed maps, rules, sensors, and AI to drive — Wayve employs an end-to-end AI model that lets it drive in a generalizable way. In other words, Wayve drives more like a human and less like a machine. It certainly felt that way; I kept glancing at the safety driver’s hands, half expecting to find they had already retaken control. They never had. Other drivers seemed convinced too. A policeman even raised his hand in thanks as we left him space to turn into a petrol station, though maybe that was meant for the safety driver.

In theory, this embodied AI approach means you could drop a Wayve car anywhere and it would simply adapt, similar to the way a human driver might when navigating an unfamiliar city. I’m not sure I’m ready to test that myself, but the team said they’d recently been driving out in the Scottish Highlands and came back unscathed.

I later learned the company, which is targeting markets in Japan, Europe, and North America, has been traveling around the world on an AI “roadshow” this year to test its technology in 500 unfamiliar cities. Knowing this, it seems Wayve will have little need to take The Knowledge, the series of exams that requires London’s black cab drivers to memorize thousands of streets and places so they can navigate without GPS (a feat that has made cabbies’ brains a subject of scientific study).

The approach means the technology is also designed to respond to the world more fluidly and react in a more human manner to those unexpected scenarios and edge cases that terrify autonomous carmakers. On my trip, it did just that. Roadworks, learner drivers, groups of cyclists, and London buses, even a person on crutches veering into the street — it handled each capably, albeit more cautiously than a London driver probably would have. The most nerve-wracking moment came when a blind man edged out with his cane between two parked cars — a scene so on the nose I had to ask the company if it had been staged (it hadn’t) — but before I could react, the car had already slowed and shifted course.

By the time we pulled back into Wayve’s compound, I realized I’d stopped wondering who was driving. Only the repeat of the shrill buzzer signaled that our safety driver was back in control. My brain, it seems, had finally accepted autonomy, at least London’s version of it. It’s rougher around the edges, less sci-fi, more human. And maybe that’s the point.



Here’s your first look at Kratos in Amazon’s God of War show


Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image, but a notable one, showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.



300,000 Chrome users hit by fake AI extensions



Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

The extensions used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)


What you need to know about fake AI extensions

Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

  • AI Assistant
  • Llama
  • Gemini AI Sidebar
  • AI Sidebar
  • ChatGPT Sidebar
  • Grok
  • Asking ChatGPT
  • ChatGBT
  • Chat Bot GPT
  • Grok Chatbot
  • Chat With Gemini
  • XAI
  • Google Gemini
  • Ask Gemini
  • AI Letter Generator
  • AI Message Generator
  • AI Translator
  • AI For Translation
  • AI Cover Letter Generator
  • AI Image Generator ChatGPT
  • Ai Wallpaper Generator
  • Ai Picture Generator
  • DeepSeek Download
  • AI Email Writer
  • Email Generator AI
  • DeepSeek Chat
  • ChatGPT Picture Generator
  • ChatGPT Translate
  • AI GPT
  • ChatGPT Translation
  • ChatGPT for Gmail


These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)


How the fake AI Chrome extension attack works

These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.

In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.

Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.


If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”


Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

7 ways you can protect yourself from malicious Chrome extensions

If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.


1) Remove any suspicious or unused browser extensions

On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.

2) Change your passwords

If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

3) Use a password manager to create and protect strong passwords

A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

4) Install strong antivirus software and keep it active

Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

5) Use an identity theft protection service

Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.


6) Keep your browser and computer fully updated

Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

7) Use a personal data removal service

Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


Kurt’s key takeaway

Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.


Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.




Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance


Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.

The refusal is the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, all stemming from Defense Secretary Pete Hegseth’s push to renegotiate every AI lab’s current contract with the military. Anthropic, so far, has refused to back down from its two red lines: no mass surveillance of Americans, and no lethal autonomous weapons (that is, weapons licensed to kill targets without any human oversight). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the Secretary reportedly issued an ultimatum: back down by the end of the business day on Friday, or else.

In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)

The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step to designating the company a “supply chain risk” – a public threat that the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to make Anthropic comply.

Advertisement

Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”

