Business
A.I. Brings the Robot Wingman to Aerial Combat

It is powered into flight by a rocket engine. It can fly a distance equal to the width of China. It has a stealthy design and is capable of carrying missiles that can hit enemy targets far beyond its visual range.
But what really distinguishes the Air Force’s pilotless XQ-58A Valkyrie experimental aircraft is that it is run by artificial intelligence, putting it at the forefront of efforts by the U.S. military to harness the capacities of an emerging technology whose vast potential benefits are tempered by deep concerns about how much autonomy to grant to a lethal weapon.
Essentially a next-generation drone, the Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill.
On a recent day at Eglin Air Force Base on Florida’s Gulf Coast, Maj. Ross Elder, 34, a test pilot from West Virginia, was preparing for an exercise in which he would fly his F-15 fighter alongside the Valkyrie.
“It’s a very strange feeling,” Major Elder said, as other members of the Air Force team prepared to test the engine on the Valkyrie. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.”
The Valkyrie program provides a glimpse into how the U.S. weapons business, military culture, combat tactics and competition with rival nations are being reshaped in possibly far-reaching ways by rapid advances in technology.
The emergence of artificial intelligence is helping to spawn a new generation of Pentagon contractors who are seeking to undercut, or at least disrupt, the longstanding primacy of the handful of giant firms who supply the armed forces with planes, missiles, tanks and ships.
The possibility of building fleets of smart but relatively inexpensive weapons that could be deployed in large numbers is allowing Pentagon officials to think in new ways about taking on enemy forces.
It also is forcing them to confront questions about what role humans should play in conflicts waged with software that is written to kill, a question that is especially fraught for the United States given its record of errant strikes by conventional drones that inflict civilian casualties.
And gaining and maintaining an edge in artificial intelligence is one element of an increasingly open race with China for technological superiority in national security.
That is where the new generation of A.I. drones, known as collaborative combat aircraft, will come in. The Air Force is planning to build 1,000 to 2,000 of them for as little as $3 million apiece, or a fraction of the cost of an advanced fighter, which is why some at the Air Force call the program “affordable mass.”
There will be a range of specialized types of these robot aircraft. Some will focus on surveillance or resupply missions, others will fly in attack swarms and still others will serve as a “loyal wingman” to a human pilot.
The drones, for example, could fly in front of piloted combat aircraft, doing early, high-risk surveillance. They could also play a major role in disabling enemy air defenses, taking risks to knock out land-based missile targets that would be considered too dangerous for a human-piloted plane.
The A.I. — a more sophisticated version of the type of programming now best known for powering chat bots — would assemble and evaluate information from its sensors as it approaches enemy forces to identify other threats and high-value targets, asking the human pilot for authorization before launching any attack with its bombs or missiles.
The cheapest ones will be considered expendable, meaning they likely will fly only one mission. The more sophisticated of these robot aircraft might cost as much as $25 million, according to an estimate by the House of Representatives, still far less than a piloted fighter jet.
“Is it a perfect answer? It is never a perfect answer when you look into the future,” said Maj. Gen. R. Scott Jobe, who until this summer was in charge of setting requirements for the air combat program, as the Air Force works to incorporate A.I. into its fighter jets and drones.
“But you can present potential adversaries with dilemmas — and one of those dilemmas is mass,” General Jobe said in an interview at the Pentagon, referring to the deployment of large numbers of drones against enemy forces. “You can bring mass to the battle space with potentially fewer people.”
The effort represents the beginning of a seismic shift in the way the Air Force buys some of its most important tools. After decades in which the Pentagon has focused on buying hardware built by traditional contractors like Lockheed Martin and Boeing, the emphasis is shifting to software that can enhance the capabilities of weapons systems, creating an opening for newer technology firms to grab pieces of the Pentagon’s vast procurement budget.
“Machines are actually drawing on the data and then creating their own outcomes,” said Brig. Gen. Dale White, the Pentagon official who has been in charge of the new acquisition program.
The Air Force realizes it must also confront deep concerns about military use of artificial intelligence, whether fear that the technology might turn against its human creators (like Skynet in the “Terminator” film series) or more immediate misgivings about allowing algorithms to guide the use of lethal force.
“You’re stepping over a moral line by outsourcing killing to machines — by allowing computer sensors rather than humans to take human life,” said Mary Wareham, the advocacy director of the arms division of Human Rights Watch, which is pushing for international limits on so-called lethal autonomous weapons.
A recently revised Pentagon policy on the use of artificial intelligence in weapons systems allows for the autonomous use of lethal force — but any particular plan to build or deploy such a weapon must first be reviewed and approved by a special military panel.
Asked if Air Force drones might eventually be able to conduct lethal strikes without explicit human sign-off on each attack, a Pentagon spokeswoman said in a statement to The New York Times that the question was too hypothetical to answer.
Any autonomous Air Force drone, the statement said, would have to be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Air Force officials said they fully understand that machines are not intelligent in the same way humans are. A.I. technology can also make mistakes — as has happened repeatedly in recent years with driverless cars — and machines have no built-in moral compass. The officials said they were considering those factors while building the system.
“It is an awesome responsibility,” said Col. Tucker Hamilton, the Air Force chief of A.I. Test and Operations, who also helps oversee the flight-test crews at Eglin Air Force Base, noting that “dystopian storytelling and pop culture has created a kind of frenzy” around artificial intelligence.
“We just need to get there methodically, deliberately, ethically — in baby steps,” he said.
The Pentagon Back Flip
The long, wood-paneled corridor in the Pentagon where the Air Force top brass have their offices is lined with portraits of a century’s worth of leaders, mixed with images of the flying machines that have given the United States global dominance in the air since World War II.
A common theme emerges from the images: the iconic role of the pilot.
Humans will continue to play a central role in the new vision for the Air Force, top Pentagon officials said, but they will increasingly be teamed with software engineers and machine learning experts, who will be constantly refining algorithms governing the operation of the robot wingmen that will fly alongside them.
Almost every aspect of Air Force operations will have to be revised to embrace this shift. It’s a task that through this summer had largely been entrusted to Generals White and Jobe, whose partnership Air Force officers nicknamed the Dale and Frag Show (General Jobe’s call sign as a pilot is Frag).
The Pentagon, through its research divisions like DARPA and the Air Force Research Laboratory, has already spent several years building prototypes like the Valkyrie and the software that runs it. But the experiment is now graduating to a so-called program of record, meaning if Congress approves, substantial taxpayer dollars will be allocated to buying the vehicles: a total of $5.8 billion over the next five years, according to the Air Force plan.
Unlike the F-35 fighter jet, which Lockheed Martin and its subcontractors deliver as a package, the new drones will be bought differently: the Air Force is planning to purchase the aircraft and the software separately.
Kratos, the builder of the Valkyrie, is already preparing to bid on any future contract, as are other major companies such as General Atomics, which for years has built attack drones used in Iraq and Afghanistan, and Boeing, which has its own experimental autonomous fighter jet prototype, the MQ-28 Ghost Bat.
A separate set of software-first companies — tech start-ups such as Shield AI and Anduril that are funded by hundreds of millions of dollars in venture capital — are vying for the right to sell the Pentagon the artificial intelligence algorithms that will handle mission decisions.
The list of hurdles that must be cleared is long.
The Pentagon has a miserable record of building advanced software and starting its own artificial intelligence programs. Over the years, it has cycled through various acronym-laden program offices that were created and then shut down with little to show.
There is constant turnover among leaders at the Pentagon, complicating efforts to keep moving ahead on schedule. General Jobe has already been assigned to a new role and General White soon will be.
The Pentagon also is going to need to disrupt the iron-fisted control that the major defense contractors have on the flow of military spending. As the structure of the Valkyrie program suggests, the military wants to do more to harness the expertise of a new generation of software companies to deliver key parts of the package, introducing more competition, entrepreneurial speed and creativity into what has long been a risk-averse and slow-moving system.
The most important job, at least until recently, rested with General Jobe, who first made a name for himself in the Air Force two decades ago when he helped devise a bombing strategy to knock out deeply buried bunkers in Iraq that held critical military communication switches.
He was asked to make key decisions setting the framework for how the A.I.-powered robot airplanes will be built. During a Pentagon interview, and at other recent events, Generals Jobe and White both said one clear imperative is that humans will remain the ultimate decision makers — not the robot drones, known as C.C.A.s, the acronym for collaborative combat aircraft.
“I’m not going to have this robot go out and just start shooting at things,” General Jobe said during a briefing with Pentagon reporters late last year.
He added that a human would always be deciding when and how to have an A.I.-enabled aircraft engage with an enemy and that developers are building a firewall around certain A.I. functions to limit what the devices will be able to do on their own.
“Think of it as just an extension to your weapons bay if you’re in an F-22, F-35 or whatnot,” he said.
Back in 1947, Chuck Yeager, then a young test pilot from Myra, W. Va., became the first human to fly faster than the speed of sound.
Seventy-six years later, another test pilot from West Virginia has become one of the first Air Force pilots to fly alongside an autonomous, A.I.-empowered combat drone.
Tall and lanky, with a slight Appalachian accent, Major Elder last month flew his F-15 Strike Eagle within 1,000 feet of the experimental XQ-58A Valkyrie — watching closely, like a parent running alongside a child learning how to ride a bike, as the drone flew on its own, reaching certain assigned speeds and altitudes.
The basic functional tests of the drone were just the lead-up to the real show, where the Valkyrie gets beyond using advanced autopilot tools and begins testing the war-fighting capabilities of its artificial intelligence. In a test slated for later this year, the combat drone will be asked to chase and then kill a simulated enemy target while out over the Gulf of Mexico, coming up with its own strategy for the mission.
During the current phase, the goal is to test the Valkyrie’s flight capacity and the A.I. software, so the aircraft is not carrying any weapons. The planned dogfight will be with a “constructed” enemy, although the A.I. agent onboard the Valkyrie will believe it is real.
Major Elder had no way to communicate directly with the autonomous drone at this early stage of development, so he had to watch very carefully as it set off on its mission.
“It wants to kill and survive,” Major Elder said of the training the drone has been given.
An unusual team of Air Force officers and civilians has been assembled at Eglin, which is one of the largest Air Force bases in the world. They include Capt. Rachel Price from Glendale, Ariz., who is wrapping up a Ph.D. at the Massachusetts Institute of Technology on computer deep learning, as well as Maj. Trent McMullen from Marietta, Ga., who has a master’s degree in machine learning from Stanford University.
One of the things Major Elder watches for is any discrepancies between simulations run by computer before the flight and the actions by the drone when it is actually in the air — a “sim to real” problem, they call it — or even more worrisome, any sign of “emergent behavior,” where the robot drone is acting in a potentially harmful way.
During test flights, Major Elder or the team manager in the Eglin Air Force Base control tower can power down the A.I. platform while keeping the basic autopilot on the Valkyrie running. So can Capt. Abraham Eaton of Gorham, Maine, who serves as a flight test engineer on the project and is charged with helping evaluate the drone’s performance.
“How do you grade an artificial intelligence agent?” he asked rhetorically. “Do you grade it on a human scale? Probably not, right?”
Real adversaries will likely try to fool the artificial intelligence, for example by creating a virtual camouflage for enemy planes or targets to make the robot believe it is seeing something else.
The initial version of the A.I. software is more “deterministic,” meaning it is largely following scripts that it has been trained with, based on computer simulations the Air Force has run millions of times as it builds the system. Eventually, the A.I. software will have to be able to perceive the world around it — and learn to understand these kinds of tricks and overcome them, skills that will require massive data collection to train the algorithms. The software will have to be heavily protected against hacking by an enemy.
The hardest part of this task, Major Elder and other pilots said, is the vital trust building that is such a central element of the bond between a pilot and wingman — their lives depend on each other, and on how each of them reacts. It is a concern back at the Pentagon too.
“I need to know that those C.C.A.s are going to do what I expect them to do, because if they don’t, it could end badly for me,” General White said.
In early tests, the autonomous drones already have shown that they will act in unusual ways, with the Valkyrie in one case going into a series of rolls. At first, Major Elder thought something was off, but it turned out that the software had determined that its infrared sensors could get a clearer picture if it did continuous flips. The maneuver would have been like a stomach-turning roller coaster ride for a human pilot, but the team later concluded the drone had achieved a better outcome for the mission.
Air Force pilots have experience with learning to trust computer automation — like the collision avoidance systems that take over if a fighter jet is headed into the ground or set to collide with another aircraft — two of the leading causes of death among pilots.
The pilots were initially reluctant to go into the air with the system engaged, as it would allow computers to take control of the planes, several pilots said in interviews. As evidence grew that the system saved lives, it was broadly embraced. But learning to trust robot combat drones will be an even bigger hurdle, senior Air Force officials acknowledged.
Air Force officials used the word “trust” dozens of times in a series of interviews about the challenges they face in building acceptance among pilots. They have already started flying the prototype robot drones with test pilots nearby, so they can get this process started.
The Air Force has also begun a second test program called Project Venom that will put pilots in six F-16 fighter jets equipped with artificial intelligence software that will handle key mission decisions.
The goal, Pentagon officials said, is an Air Force that is more unpredictable and lethal, creating greater deterrence for any moves by China, and a less deadly fight, at least for the United States Air Force.
Officials estimate that it could take five to 10 years to develop a functioning A.I.-based system for air combat. Air Force commanders are pushing to accelerate the effort — but recognize that speed cannot be the only objective.
“We’re not going to be there right away, but we’re going to get there,” General Jobe said. “It’s advanced and getting better every day as you continue to train these algorithms.”

Business
Avelo Airlines Faces Backlash for Aiding Trump’s Deportation Campaign

In the four years since its first flight, Avelo Airlines has gained loyal customers by serving smaller cities like New Haven, Conn., and Burbank, Calif.
Now, it has a new, very different line of business. It is running deportation flights for the Trump administration.
Despite weeks of protests from customers and elected officials, Avelo’s first flight for Immigration and Customs Enforcement appears to have departed on Monday morning from Mesa, Ariz., according to data from the flight-tracking services FlightAware and Flightradar24.
According to FlightAware, the plane is expected to arrive in the early afternoon at Alexandria International Airport in Louisiana, one of five locations where ICE conducts regular flights. Avelo declined to comment on the flight and ICE did not respond to multiple requests for comment.
The airline’s decision to support President Trump’s effort to accelerate deportations of immigrants is unusual and risky. ICE outsources many flights, but they are usually operated by little-known charter airlines. Commercial carriers typically avoid this kind of work so as not to wade into politics and upset customers or employees.
The risks for Avelo are perhaps even greater because a large proportion of its flights either land or take off from cities where most people are progressives or centrists who are much less likely to support Mr. Trump’s hard-line immigration policies. More than 90 percent of the airline’s flights arrived or departed from coastal states last year, according to Cirium, an aviation data firm. Nearly one in four flew to or from New Haven.
“This is really fraught, really risky,” said Alison Taylor, a professor at the New York University Stern School of Business who focuses on corporate ethics and responsibility. “The headlines and the general human aspect of this is not playing very well.”
But Avelo, which is backed by private investors and run by executives who came from larger airlines, is struggling financially.
The money the company stands to make from ICE flights is too good to pass up, the airline’s founder and chief executive, Andrew Levy, said last month in an internal email, a copy of which was reviewed by The New York Times. The flights, he said, would help to stabilize Avelo’s finances as the airline faced more competition, particularly in and near New Haven, which is home to Yale and where the airline operates more than a dozen flights a day.
“After extensive deliberations with our board of directors and our senior leaders, we concluded this new opportunity was too valuable not to pursue,” Mr. Levy wrote in the email on April 3, a day after Avelo signed the agreement with ICE.
While the military carries out some deportation flights, ICE relies heavily on private airlines. There is little public information about those flights, which ICE primarily arranges through a broker, CSI Aviation, said Tom Cartwright, a retired banking executive who has tracked the flights for years as a volunteer with Witness at the Border, an immigrants’ rights group. Most are operated by two small charter airlines, GlobalX Air and Eastern Air Express, he said.
GlobalX started operations in 2021 and conducts flights for the federal government, college basketball teams, casinos, tour operators and others. It has grown rapidly and brought in $220 million in revenue last year but is not yet profitable. This year, it has operated deportation flights to Brazil and El Salvador. Eastern Air Express is part of Eastern Airlines, a privately held company.
GlobalX and Eastern Airlines did not respond to requests for comment.
Contracts for such flights provide airlines consistent revenue, and the business is much less vulnerable to changes in economic conditions than conventional passenger flights. By Mr. Cartwright’s count, which is based on a variety of sources, ICE operated nearly 8,000 flights over the year that ended in April, most of them within the United States. CSI Aviation alone was awarded hundreds of millions of dollars in ICE contracts in recent years, according to federal data.
Avelo’s decision last month to join in on those flights was met with a swift backlash.
Within days of Mr. Levy’s internal announcement, the New Haven Immigrants Coalition, a collection of groups that support immigrants’ rights, started a campaign to pressure Avelo to drop the flights. An online petition started by the coalition has gained more than 37,000 signatures. Protests also sprouted up near airports in Connecticut, Delaware, California and Florida served by Avelo.
The Democratic governors of Connecticut and Delaware denounced Avelo, while lawmakers in Connecticut and New York released proposals to withdraw state support, including a tax break on jet fuel purchases, from companies that work with ICE.
William Tong, the Democratic attorney general of Connecticut, demanded answers of Mr. Levy, who deferred to the federal government. In a statement last month, Mr. Tong called Mr. Levy’s response “insulting and condescending.”
The Association of Flight Attendants-CWA, a union that represents flight attendants at 20 airlines, including Avelo, raised concerns. The union noted that immigrants being deported by the Trump administration had been placed in restraints, which can make flight attendants’ jobs much more difficult.
“Having an entire flight of people handcuffed and shackled would hinder any evacuation and risk injury or death,” the union said in a statement. “It also impedes our ability to respond to a medical emergency, fire on board, decompression, etc. We cannot do our jobs in these conditions.”
Avelo said that under its deal with ICE, it would operate flights within the United States and abroad, using three Boeing 737-800 jets. To handle those flights, the airline opened a base at Mesa Gateway Airport and started hiring pilots, flight attendants and other staff.
In a statement, Mr. Levy, a former top executive at United Airlines and Allegiant Air, said the airline had not entered into the contract lightly.
“We realize this is a sensitive and complicated topic,” he said. “After significant deliberations, we determined this charter flying will provide us with the stability to continue expanding our core scheduled passenger service and keep our more than 1,100 crew members employed for years to come.”
The airline, which is based in Houston, said it had operated similar flights for the Biden administration. “When our country calls, our practice is to say yes,” it said in a separate statement.
In the email last month, Mr. Levy celebrated the fact that Avelo had nearly broken even in 2024, losing just $500,000 on $310 million in revenue. But the airline needs to raise more money from investors, he said. Performance this year has suffered as national consumer confidence has waned, and the airline is facing rising competition.
Avelo was seeking revenue that would be “immune from these issues,” Mr. Levy said in the email, and pursued charter flights, including for the federal government. To accommodate the ICE flights, the airline also scaled back its presence at an airport in Santa Rosa, Calif.
Avelo has raised more than $190 million, most of it in 2020 and 2022, according to PitchBook. Mr. Levy’s email said the airline hoped to secure new funding this summer.
Business
Sam Altman's eye-scanning orbs have arrived, sparking curiosity and fear

SAN FRANCISCO — Earlier this month, a mysterious store selling a vision of the future opened its doors in downtown San Francisco’s Union Square district.
A cryptic message appeared on the storefront window: “World is the real human network. Anonymous proof of human and universally inclusive finance for the age of AI. Millions of humans in over 160 countries. Now available in the USA.”
The store attracted a small crowd of curious onlookers. People took turns scanning their eyes by peering into white devices known as orbs — to prove they are human. Then they received, free of charge, a verified World ID they could use to log into online services and apps. As a bonus, participants were given some Worldcoin cryptocurrency tokens.
Some just observed from a distance.
“I’m afraid to walk inside,” said Brian Klein, 66, as he peered into the window on his way to the theater. “I don’t want that thing taking any of my data and biometric scanning me.”
The futuristic technology is the creation of a startup called Tools for Humanity, which is based in San Francisco and Munich, Germany. Founded in 2019 by Alex Blania and Sam Altman — the entrepreneur known for OpenAI’s ChatGPT — the tech company says it’s “building for humans in the age of AI.”
In theory, these iris scans offer a safe and convenient way for consumers to verify their human identity at a time when AI-powered tools can easily create fake audio and images of people.
“We wanted a way to make sure that humans stayed special and essential in a world where the internet was going to have lots of AI-driven content,” said Altman, the chairman of Tools for Humanity, at a glitzy event in San Francisco last month.
Like the early stages of Facebook and PayPal, World is still in a growth phase, trying to lure enough customers to its network to eventually build a viable service.
A chief draw, World says, is that people can verify their humanness at an orb without providing personal information, such as their names, email addresses, phone numbers and social media profiles.
But some are skeptical, contending that handing over biometric data is too risky. They cite instances where companies have reported data breaches or filed for bankruptcy, such as DNA research firm 23andMe.
“You can’t get new eyeballs. I don’t care what this company says. Biometric data like these retinal scans will get out. Hacks and leaks happen all the time,” said Justin Kloczko, a tech and privacy advocate at Consumer Watchdog. “Your eyeballs are going to be like gold to these thieves.”
World has been making waves in Asia, Europe, South America and Central America. More than 12 million people have verified themselves through the orbs, and roughly 26 million have downloaded the World app, where people store their World ID and digital assets and gain access to other tools, the company says.
Now, World is setting its sights on the United States. The World app says people can claim up to 39 Worldcoin tokens, worth up to $45.49, if they verify they’re human with an orb.
World plans to deploy 7,500 orbs throughout the U.S. this year. It’s opening up spaces where people can scan their eyes in six cities — Los Angeles, San Francisco, Atlanta, Austin, Miami and Nashville. The L.A. space opened on Melrose Avenue last week.
Backed by well-known venture capital firms including Bain Capital, Menlo Ventures, Khosla Ventures and Andreessen Horowitz, Tools for Humanity has raised $240 million as of March, according to PitchBook.
The crypto eye-scanning project has stirred up plenty of buzz, but also controversy.
In places outside the United States, including Hong Kong, Spain, Portugal, Indonesia, South Korea, and Kenya, regulators have scrutinized the effort because of data privacy concerns.
Whistleblower Edward Snowden, who leaked classified details of the U.S. government’s mass surveillance program, responded to Altman’s post about the project in 2021 by saying “the human body is not a ticket-punch.”
Ashkan Soltani, the former executive director of the California Privacy Protection Agency, said that privacy risks can outweigh the benefits of handing over biometric data.
“Even if companies don’t store raw biometric data, like retina scans, the derived identifiers are immutable … and permanently linked to the individuals they were captured from,” he said in an email.
World executives counter that the orb captures photos of a person’s face and eyes, but doesn’t store any of that data. To receive a verified World ID, people can choose to send their iris image to their phone and that data are encrypted, meaning that the company can’t view or access the information.

Frankie Reina, of West Hollywood, left, gets an eye scan with the help of Myra Vides, center. (Christina House / Los Angeles Times)
The idea for World began five years ago. Before the popularity of ChatGPT ignited an AI frenzy, Altman was on a walk with Blania in San Francisco, talking about how trust would work in an age when AI systems are smarter than humans.
“The initial ideas were very crazy, then we came down to one that was just a little bit crazy, which became World,” Altman said onstage at an event about World’s U.S. debut at Fort Mason, a former U.S. Army post in San Francisco.
At the event, tech workers, influencers and even California Gov. Gavin Newsom and San Francisco Mayor Daniel Lurie wandered in and out of a large building filled with orbs, refreshments and entertainment.
Tools for Humanity Chief Executive Blania highlighted three ways people could use their verified World ID: gaming, dating and social media.
Currently, online services use a variety of ways to confirm people’s identities including video selfies, phone numbers, government-issued IDs and two-factor authentication.
World recently teamed up with gaming company Razer, based in Irvine and Singapore, to verify customers are human through a single sign-on, and is placing orbs in Razer stores.
Blania also touted a partnership with Match Group, under which people can use World to verify themselves and their ages on apps such as Tinder, an effort that will be tested in Japan.
“We think the internet as a whole will need a proof of human and one space that I’m personally most excited about will be social,” Blania said at the San Francisco event.
Alex Blania, the chief executive of Tools for Humanity, speaks onstage during an event for the U.S. launch of World at Fort Mason Center on April 30 in San Francisco. (Kimberly White / Getty Images for World)
Back at the World store in San Francisco, Zachary Sussman was eager to check out the orbs with his two friends, both in their 20s.
“For me, the more ‘Black Mirror’ the technology is, the more likely I am to use it,” Sussman said, referring to the popular Netflix sci-fi series. “I like the dystopian aesthetic.”
Doug Colaizzo, 35, checked out the store with his daughter and parents. Colaizzo, a developer, described himself as an “early adopter” of technology. He already uses his fingerprint to unlock his front door and his smartphone to pay for items.
“We need a better way of identifying humans,” he said. “I support this idea, even if this is not gonna be the one that wins.”
Andras Cser, vice president and principal analyst of Security and Risk Management at Forrester Research, said the fact that people have to go to a store to scan their eyes could limit adoption.
World is building a gadget called the “mini Orb” that’s the size of a smartphone, but convincing people to carry a separate device around will also be an uphill battle, he said.
“There’s big time hype with a ton of customer friction and privacy problems,” he said.
The company will have to convince skeptics like Klein to hand over their biometric data. The San Francisco resident is more cautious, especially after he had to delete his DNA data from 23andMe because the biotech company filed for bankruptcy.
“I’m not going to go off and live in the wilderness by myself,” he said. “Eventually, I might have to, but I’m going to resist as much as I can.”
Business
130,000 Igloo Coolers Recalled After Fingertip Amputations From Handle

About 130,000 Igloo coolers were recalled on Thursday after consumers reported 78 fingertip injuries from the cooler’s tow handle, 26 of which led to fingertip amputations, bone fractures or cuts, according to the U.S. Consumer Product Safety Commission.
This warning expands an initial recall issued in February of more than one million 90-quart Igloo Flip & Tow Rolling Coolers because the tow handle was crushing and seriously injuring people’s fingertips.
“The tow handle can pinch consumers’ fingertips against the cooler, posing fingertip amputation and crushing hazard,” the recall said.
In the February recall, the safety commission said that Igloo had received 12 reports of fingertip injuries from the coolers. Since then there have been an additional 78 reports, according to the commission.
The recalled coolers, all of which have the word “IGLOO” on the side of them, were manufactured before January 2024 and come in different colors. The manufacture date can be found on the bottom of the cooler.
The commission said the latest recall also affected about 20,000 coolers in Canada and 5,900 in Mexico, which is in addition to the tens of thousands recalled from each country in February.
Igloo said that owners who bought the coolers between January 2019 and January 2025 should stop using them and contact the company for a free replacement handle.
The company said in a statement that it stood behind the quality of its products and that consumer “safety and satisfaction” were its top priorities.
The coolers were sold at Academy, Costco, Dick’s, Target and other retailers and online stores and were usually priced between $80 and $140.