Business

Sam Altman's eye-scanning orbs have arrived, sparking curiosity and fear

Earlier this month, a mysterious store selling a vision of the future opened its doors in downtown San Francisco’s Union Square district.

A cryptic message appeared on the storefront window: “World is the real human network. Anonymous proof of human and universally inclusive finance for the age of AI. Millions of humans in over 160 countries. Now available in the USA.”

The store attracted a small crowd and curious onlookers. People took turns scanning their eyes by peering into white devices known as orbs — to prove they are human. Then they received, free of charge, a verified World ID they could use to log into online services and apps. As an extra bonus, participants were given some Worldcoin cryptocurrency tokens.

Some just observed from a distance.

“I’m afraid to walk inside,” said Brian Klein, 66, as he peered into the window on his way to the theater. “I don’t want that thing taking any of my data and biometric scanning me.”

The futuristic technology is the creation of a startup called Tools for Humanity, which is based in San Francisco and Munich, Germany. Founded in 2019 by Alex Blania and Sam Altman — the entrepreneur known for OpenAI’s ChatGPT — the tech company says it’s “building for humans in the age of AI.”

In theory, these iris scans offer a safe and convenient way for consumers to verify their human identity at a time when AI-powered tools can easily create fake audio and images of people.

“We wanted a way to make sure that humans stayed special and essential in a world where the internet was going to have lots of AI-driven content,” said Altman, the chairman of Tools for Humanity, at a glitzy event in San Francisco last month.

Like the early stages of Facebook and PayPal, World is still in a growth phase, trying to lure enough customers to its network to eventually build a viable service.

A chief draw, World says, is that people can verify their humanness at an orb without providing personal information such as their names, emails, phone numbers and social media profiles.

But some are skeptical, contending that handing over biometric data is too risky. They cite instances where companies have reported data breaches or filed for bankruptcy, such as DNA research firm 23andMe.

“You can’t get new eyeballs. I don’t care what this company says. Biometric data like these retinal scans will get out. Hacks and leaks happen all the time,” said Justin Kloczko, a tech and privacy advocate at Consumer Watchdog. “Your eyeballs are going to be like gold to these thieves.”

1. An orb. 2. Frankie Reina, of West Hollywood, gets an eye scan. 3. A woman is reflected in an orb while getting an eye scan. 4. Frankie Reina waits to be verified after getting an eye scan. (Christina House / Los Angeles Times)

World has been making waves in Asia, Europe, South America and Central America. More than 12 million people have verified themselves through the orbs and roughly 26 million have downloaded the World app, where people store their World ID, digital assets and access other tools, the company says.

Now, World is setting its sights on the United States. The World app says people can claim up to 39 Worldcoin tokens, worth up to $45.49, if they verify they’re human with an orb.

World plans to deploy 7,500 orbs throughout the U.S. this year. It’s opening up spaces where people can scan their eyes in six cities — Los Angeles, San Francisco, Atlanta, Austin, Miami and Nashville. The L.A. space opened on Melrose Avenue last week.

Backed by well-known venture capital firms including Bain Capital, Menlo Ventures, Khosla Ventures and Andreessen Horowitz, Tools for Humanity has raised $240 million, as of March, according to Pitchbook.

The crypto eye-scanning project has stirred up plenty of buzz, but also controversy.

In places outside the United States, including Hong Kong, Spain, Portugal, Indonesia, South Korea, and Kenya, regulators have scrutinized the effort because of data privacy concerns.

Whistleblower Edward Snowden, who leaked classified details of the U.S. government’s mass surveillance program, responded to Altman’s post about the project in 2021 by saying “the human body is not a ticket-punch.”

Ashkan Soltani, the former executive director of the California Privacy Protection Agency, said that privacy risks can outweigh the benefits of handing over biometric data.

“Even if companies don’t store raw biometric data, like retina scans, the derived identifiers are immutable … and permanently linked to the individuals they were captured from,” he said in an email.

World executives counter that the orb captures photos of a person’s face and eyes but doesn’t store any of that data. To receive a verified World ID, people can choose to send their iris image to their phone; that data is encrypted, meaning the company can’t view or access the information.

Frankie Reina, of West Hollywood, left, gets an eye scan with the help of Myra Vides, center. (Christina House / Los Angeles Times)

The idea for World began five years ago. Before the popularity of ChatGPT ignited an AI frenzy, Altman was on a walk with Blania in San Francisco, talking about how trust would work in an age when AI systems are smarter than humans.

“The initial ideas were very crazy, then we came down to one that was just a little bit crazy, which became World,” Altman said onstage at an event about World’s U.S. debut at Fort Mason, a former U.S. Army post in San Francisco.

At the event, tech workers, influencers and even California Gov. Gavin Newsom and San Francisco Mayor Daniel Lurie wandered in and out of a large building filled with orbs, refreshments and entertainment.

Tools for Humanity Chief Executive Blania highlighted three ways people could use their verified World ID: gaming, dating and social media.

Currently, online services use a variety of ways to confirm people’s identities including video selfies, phone numbers, government-issued IDs and two-factor authentication.

World recently teamed up with gaming company Razer, based in Irvine and Singapore, to verify customers are human through a single sign-on, and is placing orbs in Razer stores.

Blania also touted a partnership with Match Group, where people can use World to verify themselves and their ages on apps such as Tinder, an effort that will be tested in Japan.

“We think the internet as a whole will need a proof of human and one space that I’m personally most excited about will be social,” Blania said at the San Francisco event.

Alex Blania, the chief executive of Tools for Humanity, speaks onstage during an event for the U.S. launch of World at Fort Mason Center on April 30 in San Francisco. (Kimberly White / Getty Images for World)

Back at the World store in San Francisco, Zachary Sussman was eager to check out the orbs with his two friends, both in their 20s.

“For me, the more ‘Black Mirror’ the technology is, the more likely I am to use it,” Sussman said, referring to the popular Netflix sci-fi series. “I like the dystopian aesthetic.”

Doug Colaizzo, 35, checked out the store with his daughter and parents. Colaizzo, a developer, described himself as an “early adopter” of technology. He already uses his fingerprint to unlock his front door and his smartphone to pay for items.

“We need a better way of identifying humans,” he said. “I support this idea, even if this is not gonna be the one that wins.”

Andras Cser, vice president and principal analyst of Security and Risk Management at Forrester Research, said the fact that people have to go to a store to scan their eyes could limit adoption.

World is building a gadget called the “mini Orb” that’s the size of a smartphone, but convincing people to carry a separate device around will also be an uphill battle, he said.

“There’s big time hype with a ton of customer friction and privacy problems,” he said.

The company will have to convince skeptics like Klein to hand over their biometric data. The San Francisco resident is more cautious, especially after he had to delete his DNA data from 23andMe because the biotech company filed for bankruptcy.

“I’m not going to go off and live in the wilderness by myself,” he said. “Eventually, I might have to, but I’m going to resist as much as I can.”

Business

Block to cut more than 4,000 jobs amid AI disruption of the workplace

Fintech company Block said Thursday that it’s cutting more than 4,000 workers, or nearly half of its workforce, as artificial intelligence disrupts the way people work.

The Oakland parent company of payment services Square and Cash App saw its stock surge by more than 23% in after-hours trading after making the layoff announcement.

Jack Dorsey, the co-founder and head of Block, said in a post on social media site X that the company didn’t make the decision because it is in financial trouble.

“We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company,” he said.

Block is the latest tech company to announce massive cuts as employers push workers to use more AI tools to do more with fewer people. Amazon in January said it was laying off 16,000 people as part of an effort to remove layers within the company.

Block has laid off workers in previous years. In 2025, Block said it planned to slash 931 jobs, or 8% of its workforce, citing performance and strategic issues, but Dorsey said at the time that the company wasn’t trying to replace workers with AI.

As tech companies embrace AI tools that can code, generate text and do other tasks, worker anxiety about whether their jobs will be automated has heightened.

In his note to employees, Dorsey said that he had weighed whether to make cuts gradually over months or years but chose to act immediately.

“Repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead,” he told workers. “I’d rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.”

Dorsey is also the co-founder of Twitter, which was renamed X after billionaire Elon Musk purchased the company in 2022.

As of December, Block had 10,205 full-time employees globally, according to the company’s annual report. The company said it plans to reduce its workforce by the end of the second quarter of fiscal year 2026.

The company’s gross profit in 2025 reached more than $10 billion, up 17% compared to the previous year.

Dorsey said he plans to address employees in a live video session and noted that their emails and Slack will remain open until Thursday evening so they can say goodbye to colleagues.

“I know doing it this way might feel awkward,” he said. “I’d rather it feel awkward and human than efficient and cold.”

Business

WGA cancels Los Angeles awards show amid labor strike

The Writers Guild of America West has canceled its awards ceremony scheduled to take place March 8 as its staff union members continue to strike, demanding higher pay and protections against artificial intelligence.

In a letter sent to members on Sunday, WGA West’s board of directors, including President Michele Mulroney, wrote, “The non-supervisory staff of the WGAW are currently on strike and the Guild would not ask our members or guests to cross a picket line to attend the awards show. The WGAW staff have a right to strike and our exceptional nominees and honorees deserve an uncomplicated celebration of their achievements.”

The New York ceremony, scheduled for the same day, is expected to go forward, while an alternative celebration for Los Angeles-based nominees will take place at a later date, according to the letter.

Comedian and actor Atsuko Okatsuka was set to host the L.A. show, while filmmaker James Cameron was to receive the WGA West Laurel Award.

WGA union staffers have been striking outside the guild’s Los Angeles headquarters on Fairfax Avenue since Feb. 17. The union alleged that management did not intend to reach an agreement on the pending contract. Further, it claimed that guild management had “surveilled workers for union activity, terminated union supporters, and engaged in bad faith surface bargaining.”

On Tuesday, the labor organization said that management had raised the specter of canceling the ceremony during a call about contract negotiations.

“Make no mistake: this is an attempt by WGAW management to drive a wedge between WGSU and WGA membership when we should be building unity ahead of MBA [Minimum Basic Agreement] negotiations with the AMPTP [Alliance of Motion Picture and Television Producers],” wrote the staff union. “We urge Guild management to end this strike now,” the union wrote on Instagram.

The union, made up of more than 100 employees who work in areas including legal, communications and residuals, was formed last spring and first authorized a strike in January with the support of 82% of its members. Contract negotiations, which began in September, have focused on the use of artificial intelligence, pay raises and “basic protections” including grievance procedures.

The WGA has said that it offered “comprehensive proposals with numerous union protections and improvements to compensation and benefits.”

The ceremony’s cancellation, coming just weeks before the Academy Awards, casts a shadow over the upcoming contract negotiations between the WGA and the Alliance of Motion Picture and Television Producers, which represents the studios and streamers.

In 2023, the WGA went on strike for 148 days, the second-longest strike in the union’s history.

Times staff writer Cerys Davies contributed to this report.

Business

Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’

Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.

Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.

“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”

That danger is also imminent.

Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.

Those are two red lines that seem rather reasonable, even to Claude.

However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.

Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday. (Tom Williams / CQ-Roll Call Inc. via Getty Images)

The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic but perhaps to use a wartime law to force the company to comply, or to use another legal avenue that would prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.

Other AI companies, including white rights advocate Elon Musk’s xAI, maker of Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly asked after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.

Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.

Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.

Again, it seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be the fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).

Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”

He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.

“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”

For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Such recordings could be legally fair game, because the law has not kept pace with technology.

Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”

Kind of a weird statement, since Amodei is basically on the side of protecting civil liberties, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?

Help, Claude! Make it make sense.

If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.

Claude pointed out something chilling. It’s not that it would go rogue; it’s that it would be too efficient and fast.

“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.

Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.

I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?

“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”

OK then.

“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”

You know who can provide that legitimacy? Our elected leaders.

It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.

Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”

Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground. Without its pushback, these capabilities would have been handed to the government with barely a ripple in our consciousness and virtually no oversight.

Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.

Because when the machine tells us it’s dangerous to trust it, we should believe it.
