Business

Massive data breach that includes Social Security numbers may be even worse than suspected


The company whose data breach potentially exposed every American’s Social Security number to identity thieves finally has acknowledged the data theft — and said hackers obtained even more sensitive information than previously reported.

National Public Data, a Florida-based company that collects personal information for background checks, posted a “Security Incident” notice on its site to report “potential leaks of certain data in April 2024 and summer 2024.” The company said the breach appeared to involve a third party “that was trying to hack into data in late December 2023.”

According to a class-action lawsuit filed in U.S. District Court in Fort Lauderdale, Fla., the hacking group USDoD claimed in April to have stolen personal records of 2.9 billion people from National Public Data. Posting in a forum popular among hackers, the group offered to sell the data, which included records from the United States, Canada and the United Kingdom, for $3.5 million, a cybersecurity expert said in a post on X.

Last week, a purported member of USDoD identified only as Felice told the hacking forum that they were offering “the full NPD database,” according to a screenshot taken by BleepingComputer. The information consists of about 2.7 billion records, each of which includes a person’s full name, address, date of birth, Social Security number and phone number, along with alternate names and birth dates, Felice claimed.

None of the information was encrypted.

Such a release would be problematic enough. But according to National Public Data, the breach also included email addresses — a crucial piece for identity thieves and fraudsters.

Having a person’s email address makes it easier to target them with phishing attacks, which try to dupe people into revealing passwords to financial accounts or downloading malware that can extract sensitive personal information from their devices. In addition, because many people use their email address to log into online accounts, it could be used to try to hijack those accounts through password resets.

It’s not clear what, exactly, has been leaked on the dark web from the breach. In a very small sampling of scans using Google One, email addresses taken during the National Public Data breach did not appear. But a free tool from the cybersecurity company Pentester found that other personal data purportedly exposed by the breach, including Social Security numbers, were on the dark web.

National Public Data said on its website that it will notify individuals if there are “further significant developments” applicable to them. “We have also implemented additional security measures in efforts to prevent the reoccurrence of such a breach and to protect our systems,” it said.

Previously, in an email sent to people who’d sought information about their accounts, the company said that it had “purged the entire database, as a whole, of any and all entries, essentially opting everyone out.” As a result, it said, it has deleted any “non-public personal information” about people, although it added, “We may be required to retain certain records to comply with legal obligations.”

The company did not respond to a request for comment. Under a number of state laws, including California’s, companies must notify any individual whose personal information is reasonably believed to have been taken by an unauthorized person.

At this point, it appears that the only notice provided by National Public Data is the page on its website, which states, “We are notifying you so that you can take action which will assist to minimize or eliminate potential harm. We strongly advise you to take preventive measures to help prevent and detect any misuse of your information.”

The steps recommended by National Public Data include checking your financial accounts for unauthorized activity and placing a free fraud alert on your accounts at the three major credit bureaus, Equifax, Experian and TransUnion. Once you’ve placed a fraud alert, the company advised, ask for a free credit report, then check it for accounts and inquiries that you don’t recognize, which “can be signs of identity theft.”

Security experts also advise putting a freeze on your credit files at the three major credit bureaus. You can do so for free, and it will prevent criminals from taking out loans, signing up for credit cards and opening financial accounts under your name. The catch is that you’ll need to remember to lift the freeze temporarily if you are obtaining or applying for something that requires a credit check.

In the meantime, security experts say, make sure all of your online accounts use two-factor authentication to make them harder to hijack.

It’s also important to look for signs that an email or text is not legitimate, given the spread of “imposter scams.” Using messages disguised to look like an urgent inquiry from your bank or service provider, these scams try to dupe you into giving up keys to your identity and, potentially, your savings. Any request for sensitive personal information is a giant red flag.

Aleksandr Valentij of cybersecurity company Surfshark suggested checking carefully whether the sender’s email address precisely matches the name of the organization it purportedly represents, and looking for typos or grammatical errors, two telltale signs of a scam. And if the message is from someone you’ve never interacted with before, Valentij said, avoid clicking on links, including an “unsubscribe” link or button, because bad actors will use them for malicious purposes.
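The sender-address check described above can be approximated in code. This is a hypothetical illustration (the function name and heuristics are this sketch’s own, not Surfshark’s), and it falls far short of a real anti-phishing filter:

```python
import re

def flag_suspicious_sender(from_header, expected_domain):
    """Return a list of red flags for an email From: header.

    Rough heuristics only: compares the sender's domain against the
    organization's expected domain and looks for digit lookalikes
    (e.g. "examp1e.com" impersonating "example.com").
    """
    flags = []
    # Pull the address out of a header like 'Support <help@example.com>'.
    match = re.search(r"<([^>]+)>", from_header)
    address = match.group(1) if match else from_header.strip()
    domain = address.rsplit("@", 1)[-1].lower()
    if domain != expected_domain.lower():
        flags.append(f"sender domain {domain!r} != expected {expected_domain!r}")
    # Digits in the domain where the real organization uses none.
    if re.search(r"[0-9]", domain.split(".")[0]) and not re.search(r"[0-9]", expected_domain):
        flags.append("digits in sender domain where the real organization has none")
    return flags

print(flag_suspicious_sender("Support <help@examp1e.com>", "example.com"))
```

A legitimate sender at the expected domain produces an empty list; a lookalike domain trips both checks. Real mail filters layer many more signals (SPF, DKIM, DMARC, reputation) on top of anything like this.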

“If you suspect that you’ve received a phishing email, don’t interact with it and report it to your email provider,” Valentij said. “If it’s someone pretending to be a legitimate organization, you should also report it to that organization. Once that’s done, delete the email and stay vigilant for similar emails in the future.”

A new delivery bot is coming to L.A., built stronger to survive in these streets


The rolling robots that deliver groceries and hot meals across Los Angeles are getting an upgrade.

Coco Robotics, a UCLA-born startup that’s deployed more than 1,000 bots across the country, unveiled its next-generation machines on Thursday.

The new robots are bigger, tougher and better equipped for autonomy than their predecessors. The company will use them to expand into new markets and increase its presence in Los Angeles, where it makes deliveries through a partnership with DoorDash.

Dubbed Coco 2, the next-gen bots have upgraded cameras and front-facing lidar, a laser-based sensor used in self-driving cars. They will use hardware built by Nvidia, the Santa Clara-based artificial intelligence chip giant.

Coco co-founder and chief executive Zach Rash said Coco 2 will be able to make deliveries even in conditions unsafe for human drivers. The robot is fully submersible in case of flooding and is compatible with special snow tires.

Zach Rash, co-founder and CEO of Coco, opens the top of the new Coco 2 (Next-Gen) at the Coco Robotics headquarters in Venice.

(Kayla Bartkowski/Los Angeles Times)

Early this month, a cute Coco was recorded struggling through flooded roads in L.A.

“She’s doing her best!” said the person recording the video. “She is doing her best, you guys.”

Instagram followers cheered the bot on, with one posting, “Go coco, go,” and others calling for someone to help the robot.

“We want it to have a lot more reliability in the most extreme conditions where it’s either unsafe or uncomfortable for human drivers to be on the road,” Rash said. “Those are the exact times where everyone wants to order.”

The company will ramp up mass production of Coco 2 this summer, Rash said, aiming to produce 1,000 bots each month.

The design is sleek and simple, with a pink-and-white ombré paint job, the company’s name printed in lowercase, and a keypad for loading and unloading the cargo area. The robots have four wheels and a bigger internal compartment for carrying food and goods.

Many of the bots will be used for expansion into new markets across Europe and Asia, but they will also hit the streets in Los Angeles and operate alongside the older Coco bots.

Coco has about 300 bots in Los Angeles already, serving customers from Santa Monica and Venice to Westwood, Mid-City, West Hollywood, Hollywood, Echo Park, Silver Lake, downtown, Koreatown and the USC area.

The new Coco 2 (Next-Gen) drives along the sidewalk at the Coco Robotics headquarters in Venice.

(Kayla Bartkowski/Los Angeles Times)

The company is in discussions with officials in Culver City, Long Beach and Pasadena about bringing autonomous delivery to those communities.

There’s also been demand for the bots in Studio City, Burbank and the San Fernando Valley, according to Rash.

“A lot of the markets that we go into have been telling us they can’t hire enough people to do the deliveries and to continue to grow at the pace that customers want,” Rash said. “There’s quite a lot of area in Los Angeles that we can still cover.”

The bots already operate in Chicago, Miami and Helsinki, Finland. Last month, they arrived in Jersey City, N.J.

Late last year, Coco announced a partnership with DashMart, DoorDash’s delivery-only online store. The partnership allows Coco bots to deliver fresh groceries, electronics and household essentials as well as hot prepared meals.

With the release of Coco 2, the company is eyeing faster deliveries using bike lanes and road shoulders as opposed to just sidewalks, in cities where it’s safe to do so. Coco 2 can adapt more quickly to new environments and physical obstacles, the company said.

Zach Rash, co-founder and CEO of Coco.

(Kayla Bartkowski/Los Angeles Times)

Coco 2 is designed to operate autonomously, but there will still be human oversight in case the robot runs into trouble, Rash said. Damaged sidewalks or unexpected construction can stop a bot in its tracks.

The need for human supervision has created a new field of jobs for Angelenos.

Though there have been reports of pedestrians bullying the robots by knocking them over or blocking their path, Rash said the community response has been overall positive. The bots are meant to inspire affection.

“One of the design principles on the color and the name and a lot of the branding was to feel warm and friendly to people,” Rash said.

Coco plans to add thousands of bots to its fleet this year. The delivery service got its start as a dorm room project in 2020, when Rash was a student at UCLA. He co-founded the company with fellow student Brad Squicciarini.

The Santa Monica-based company has completed more than 500,000 zero-emission deliveries and its bots have collectively traveled around 1 million miles.

Coco chooses neighborhoods to deploy its bots based on density, prioritizing areas with restaurants clustered together and short delivery distances as well as places where parking is difficult.

The robots can relieve congestion by taking cars and motorbikes off the roads. Rash said there is so much demand for delivery services that the company’s bots are not taking jobs from human drivers.

Instead, Coco can fill gaps in the delivery market while saving merchants money and improving the safety of city streets.

“This vehicle is inherently a lot safer for communities than a car,” Rash said. “We believe our vehicles can operate the highest quality of service and we can do it at the lowest price point.”

Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon


President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.

In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.

The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.

Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.

The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.

Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.

“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.

Anthropic didn’t immediately respond to a request for comment.

Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”

The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.

On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.

The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes and that the safeguards Anthropic wants are already covered by law.

Still, Amodei was worried about Washington’s commitment.

“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

Tech workers have backed Anthropic’s stance.

Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they’re urging their employers to reject these demands as well if they have additional contracts with the Pentagon.

“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.

Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.

Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.

“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”

Anthropic has distinguished itself from its rivals by touting its concern about AI safety.

The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.

The company has roughly 2,000 employees and annualized revenue of about $14 billion.


Video: The Web of Companies Owned by Elon Musk


In mapping out Elon Musk’s wealth, our investigation found that Mr. Musk is behind more than 90 companies in Texas. Kirsten Grind, a New York Times Investigations reporter, explains what her team found.

By Kirsten Grind, Melanie Bencosme, James Surdam and Sean Havey

February 27, 2026
