
Technology

I rode in one of the UK’s first self-driving cars


I never really believed self-driving cars would make it to the UK, so you can imagine my surprise when I found myself clambering into one of Wayve’s autonomous vehicles for a journey around north London a few weeks ago.

In June, the company announced plans with Uber to begin trialing Level 4 fully autonomous robotaxis in the capital as soon as 2026, part of a government plan to fast-track self-driving pilots ahead of a potential wider rollout in late 2027. Alphabet-owned Waymo, now a fixture of US cities like San Francisco, Los Angeles, and Phoenix, also has its eyes on London, announcing plans for its own fully driverless robotaxi service in 2026, one of its first efforts to expand beyond the US.

My skepticism on whether self-driving cars will work in London isn’t unfounded. On many levels, London is a robotaxi’s worst nightmare. At every possible turn, the city is at odds with autonomy. Its road network is narrow, winding, and hellish to navigate, a morass of concrete that emerged over centuries, designed to be used by horses and carts, not cars. Tight streets make avoiding obstacles — potholes, parked cars, you know the drill — even tougher, and this is before we’ve even started to consider the flood of other vehicles, jaywalkers, tourists, cyclists, buses, taxi cabs, and animals (like rogue military horses) sharing the road. And the less said about roundabouts or the weather, the better.

Even if a robotaxi manages to successfully navigate London, it needs Londoners on board with the technology too. This might be tough. We’re a skeptical bunch, and when it comes to putting AI in cars, surveys rank Brits among the world’s most wary. There’s also been a lot of hype — and failure — surrounding the technology in the past, leaving a legacy of distrust and disbelief that new entrants must dispel. Then there are the iconic black cabs to contend with, and they’ve been known to drive a hard bargain. When Uber first came on the scene, cabbies repeatedly brought London to a standstill, and the group is still at war with the ridesharing company today. That said, they don’t seem too threatened this time around, dismissing driverless cars as “a fairground ride” and “a tourist attraction in San Francisco.”

Wayve’s headquarters didn’t feel like a San Francisco tourist attraction. The combination of undecorated brick and black metal fencing gives Wayve, which started life in a Cambridge garage in 2017 and is still led by cofounder Alex Kendall, the vibe of a random warehouse. Just 15 minutes away is King’s Cross, a reformed industrial wasteland now home to companies like Google and Meta, which many would consider a more conventional setting for a company that has raised more than $1 billion from titans like Nvidia, Microsoft, and SoftBank (and is reportedly in talks to raise up to $2 billion more).


Its cars — a fleet of Ford Mustang Mach-Es — didn’t look that futuristic either. The only real giveaway that they planned to replace human drivers was a small box of sensors mounted above the windshield, a far cry from the obtrusive humps on top of Waymos.

Inside, it was just as ordinary. As we rolled out of Wayve’s compound, the only thing that really stood out was the big red emergency stop button in the center console, a reminder that, legally speaking, a human driver needs to be ready to seize control at any moment. If it hadn’t been for the shrill buzz going off to indicate the robotaxi had taken over, I don’t think I’d have noticed the driver had given up any control at all.

It handled the city well — far better than I expected. Within minutes, we’d left the quiet side streets near Wayve’s base and joined a busier road. The car eased between parked cars and delivery vehicles, slowed politely when food couriers cut in front of us on electric bikes, and, mercifully, didn’t mow down any of the jaywalkers who treated London’s crossings more like suggestions than rules.

The ride wasn’t exactly smooth, though, and nothing like the ethereal calm I felt when I took my first Waymo in San Francisco this summer. Wayve was more hesitant than I’m used to, a little like when my sister took me out for the first time after earning her license a few years ago.

That hesitancy is especially odd in London. Friends, cabbies, bus drivers, and Uber drivers I’ve ridden with all seem to exude a kind of impatient confidence, a sense of urgency that Wayve utterly lacked. I’ve not driven since I passed my test 15 years ago — the Tube makes it pretty easy to do without in London — but its pauses still managed to test my patience. Our route took us past the high walls of Pentonville Prison in Islington, and we trundled behind a cyclist I was sure even I could safely overtake and any Londoner certainly would have.


I later learned this tentativeness is a feature, not a bug. Unlike Waymo — which uses a combination of detailed maps, rules, sensors, and AI to drive — Wayve employs an end-to-end AI model that lets it drive in a generalizable way. In other words, Wayve drives more like a human and less like a machine. It certainly felt that way; I kept glancing at the safety driver’s hands, half expecting to see them having already retaken control. They never had. Other drivers seemed convinced too. A policeman even raised his hand in thanks as we left him a space to turn into a petrol station, though maybe that was meant for the safety driver.

In theory, this embodied AI approach means you could drop a Wayve car anywhere and it would simply adapt, similar to the way a human driver might when navigating an unfamiliar city. I’m not sure I’m ready to test that myself, but the team said they’d recently been driving out in the Scottish Highlands and came back unscathed.

I later learned the company, which is targeting markets in Japan, Europe, and North America, has been traveling around the world on an AI “roadshow” this year to test its technology in 500 unfamiliar cities. Knowing this, it seems Wayve will have little need to take The Knowledge, a series of exams for London’s black cab drivers to show they have memorized thousands of streets and places, letting them navigate without GPS (it also makes scientists love their brains).

The approach means the technology is also designed to respond to the world more fluidly and react in a more human manner to those unexpected scenarios and edge cases that terrify autonomous carmakers. On my trip, it did just that. Roadworks, learner drivers, groups of cyclists, and London buses, even a person on crutches veering into the street — it handled each capably, albeit more cautiously than a London driver probably would have. The most nerve-wracking moment came when a blind man edged out with his cane between two parked cars — a scene so on the nose I had to ask the company if it had been staged (it hadn’t) — but before I could react, the car had already slowed and shifted course.

By the time we pulled back into Wayve’s compound, I realized I’d stopped wondering who was driving. It was only the repeat of the shrill buzzer that signaled our safety driver was back in control. My brain, it seems, has finally accepted autonomy, at least London’s version of it. It’s rougher around the edges, less sci-fi, more human. And maybe that’s the point.


Technology

Amazon.com says things are fixed after some issues with logging in and checking out


If you were having issues shopping on Amazon or loading your playlists on Amazon Music on Thursday, you weren’t alone. For over three hours today, Downdetector showed a sizable spike in people reporting issues with checkout, search, and logging in. The problem seemed to be affecting both the site and the mobile apps. But an Amazon spokesperson tells The Verge that the issues are now fixed.

“We’re sorry that some customers may have temporarily experienced issues while shopping,” Amazon spokesperson Jennie Bryant says in a statement. “We have resolved the issue, which was related to a software code deployment, and website and app are now running smoothly.”

Several Verge staffers experienced the issues firsthand. Clicking through to many products produced a “sorry, something went wrong” error, and even pages that did load weren’t showing pricing. Users reported being repeatedly logged out of their accounts when trying to check out or load their carts. Even the parts of Amazon.com that were working seemed to load slowly.

The company has been dealing with AWS outages in Bahrain and the United Arab Emirates due to drone strikes by the Iranian military, but there has not been any word of more widespread outages in the US or elsewhere.

Update March 5th: Added comment from Amazon saying that things are fixed.


Technology

$163K in fake medical bill charges; AI uncovers it for you



Last summer, a man’s brother-in-law suffered a fatal heart attack. The hospital bill for four hours of emergency care: $195,628.

The man’s sister-in-law was ready to pay it. He asked her to wait. He requested an itemized bill with CPT codes, the universal billing codes hospitals use, and fed the whole thing into Claude, an AI chatbot.

Within minutes, Claude found duplicate charges, services billed as “inpatient” even though the patient was never admitted, supply costs inflated by 500% to 2,300% above Medicare rates, and charges for procedures that never happened. He cross-checked with ChatGPT. Both AIs agreed. He wrote a six-page letter citing every violation by name.

The hospital dropped the bill to $33,000. An 83% reduction. Zero medical training. A $20 app.


A man cross-checked a hospital bill with AI and got it reduced by some 83%. (Neil Godwin/Getty Images)

Your bill is probably wrong, too

That story sounds extreme. It’s not.

The Medical Billing Advocates of America estimates 3 out of 4 medical bills contain errors. The average hospital bill over $10,000 has roughly $1,300 in mistakes. And less than 1% of denied insurance claims are ever appealed. Hospitals and insurers are banking on the fact that you won’t check.

AI flips that equation. You don’t need to understand CPT codes or have a medical billing degree. You just need to paste.

You can use AI platforms, like ChatGPT, to spot errors or suspicious charges on medical bills. (Jaap Arriens/NurPhoto via Getty Images)


The 5-minute audit

Step 1: Call your provider and request an itemized bill with CPT codes. Not the summary. The full line-by-line breakdown. You’re legally entitled to this.

Step 2: Open ChatGPT, Claude, Grok or Gemini (free versions work) and paste this:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

Step 3: Paste your bill. The AI will translate every line and tell you what looks wrong.
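To make the idea concrete, here is a minimal Python sketch of the kind of mechanical check an AI audit performs on an itemized bill: flagging duplicate CPT codes and charges far above a reference rate. The CPT codes, descriptions, reference rates, and the markup threshold below are illustrative placeholders, not real billing data or a real audit tool.

```python
# Sketch: flag duplicate CPT line items and charges well above a reference rate.
# All codes, prices, and the 5x markup threshold are hypothetical examples.
from collections import Counter

def audit_bill(line_items, reference_rates, markup_threshold=5.0):
    """Return human-readable flags for duplicates and inflated charges.

    line_items: list of (cpt_code, description, charge) tuples
    reference_rates: dict mapping cpt_code -> reference price
    """
    flags = []

    # Flag any CPT code that appears more than once on the bill.
    counts = Counter(code for code, _, _ in line_items)
    for code, count in counts.items():
        if count > 1:
            flags.append(f"duplicate: CPT {code} billed {count} times")

    # Flag any charge more than markup_threshold times its reference rate.
    for code, desc, charge in line_items:
        ref = reference_rates.get(code)
        if ref and charge > ref * markup_threshold:
            pct = round((charge / ref - 1) * 100)
            flags.append(f"inflated: {desc} (CPT {code}) is {pct}% above reference")

    return flags

# Example bill with a duplicated lab test and an overpriced supply item.
bill = [
    ("85025", "Complete blood count", 420.00),
    ("85025", "Complete blood count", 420.00),  # same test billed twice
    ("A4550", "Surgical tray", 890.00),
]
rates = {"85025": 10.75, "A4550": 35.00}

for flag in audit_bill(bill, rates):
    print(flag)
```

The point isn’t that you need this script — the chatbot prompt above does the same job in plain English — but that the “audit” is largely pattern-matching any of these tools can do once you give them the line-by-line breakdown.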



If the AI finds errors, call the billing department and ask for a supervisor. (iStock)

Step 4: If the AI finds errors (it probably will), call the billing department and ask for a supervisor. Reference the specific codes. Hospitals resolve disputes all the time when patients show up prepared.

Pro tip: Counterforce Health (counterforcehealth.org) is a free AI tool built specifically for insurance denial appeals. Worth bookmarking.

It’s time to give your medical bills a thorough examination. The AI will see you now.

Real talk. Everybody’s talking about AI. Nobody’s showing you what to actually DO with it. My new free newsletter, Splash of AI (SplashofAI.com), gives you one trick, one tool and one “wait, I can do THAT?” moment every single week. Five minutes. Plain English. The kind of stuff that saves you time, money or both. You’ll wonder how you got by without it.


Send this to someone who is staring at a medical bill they can’t make sense of. Forward this right now. Seriously. This could save them hundreds or even thousands of dollars, and it takes less time than making coffee.


Get tech-smarter. Starting today.

Kim Komando cuts through the tech noise so you don’t have to. Real advice. Zero jargon. Every single day.

Catch the national radio show on 500-plus stations, get the free daily newsletter, watch on YouTube or listen to the podcast wherever you get your shows. It’s all waiting at Komando.com.

Copyright 2026, WestStar Multimedia Entertainment. All rights reserved.


Technology

Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya


Meta’s AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show “bathroom visits, sex and other intimate moments.”

So far, at least one proposed class action lawsuit accusing Meta of false advertising and privacy violations has emerged in response to Svenska Dagbladet’s reporting, citing the company’s claim that its smart glasses are designed for privacy:

By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer’s decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person’s life.

The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text, or audio, with the goal of helping AI systems make sense of the data they’re training on. “We see everything — from living rooms to naked bodies,” one worker says, according to Svenska Dagbladet. “Meta has that type of content in its databases.”

A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this “does not always work as intended,” and some faces are still visible. Another person reportedly tells the outlet that a wearer’s bank cards are sometimes seen in the footage they review as well.

Meta’s Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance.


EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025 — more than tripling its sales in 2023 and 2024 combined. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses “unless you turn off ‘Hey Meta.’” It also stopped allowing wearers to opt out of storing their voice recordings in the cloud.

As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses “stays on the user’s device” unless they choose to share it with other people or Meta.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Clayton says. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

