Business
Column: Black spatulas and mystery drones: Your guide to the unfounded panics of the season
The “silly season” of news coverage used to refer to the dog days of summer, when there was so little of importance happening that newspapers and cable channels filled the vacuum with fluff.
Not this year.
Starting in October and gaining intensity through the season, Americans have found themselves awash in panicky health and safety warnings about previously unappreciated threats.
Most people don’t look at the sky. They don’t know what airplanes look like up there, particularly at night, and they don’t know what the stars and planets look like.
— Scientist Cheryl Rofer explains the drone panic
It started with warnings about your black plastic spatulas and other such implements. Spurred by a study and press release issued Oct. 1 by the Seattle nonprofit Toxic-Free Future, news organizations from coast to coast — including The Times — posted articles advising consumers to ditch their black food utensils and children’s toys with black plastic pieces.
The black spatula panic was soon outrun by the drone panic, which has Americans scanning the skies for menacing aircraft.
As is typically the case, both of these panics spring from a nugget of truth. It’s true, for example, that chemicals that could theoretically harm people’s health at high exposure levels can be found in some household products — chiefly chemical flame retardants in black plastic electronic devices that have been banned from new uses but have been getting recycled into the consumer stream.
It’s also true that drones, ranging in size from the lightweight models deployed by hobbyists to large commercial models, are becoming a pain in the neck, with the largest craft posing a real danger to commercial aircraft.
But the distance between those nuggets of reality and the level of public hysteria is so great that the latter can be explained mostly by two factors: the hunger for clicks on news sites and copy to fill newspaper columns, and the impulse of preening politicians to show they’re attentive to constituents’ concerns, no matter how dubious.
Let’s take these panics in order, starting with the black utensils. For a time, press advisories that people ditch their black spatulas were impossible to ignore. The most alarmist was probably an offering from The Atlantic, which was headlined: “Throw Out Your Black Plastic Spatula/It’s probably leaching chemicals into your cooking oil.”
The piece ran under an illustration of a black spatula dripping sinister gobbets of melting plastic, against a background of bilious green. It gave prominent space to the Toxic-Free Future study, as well as to research papers by the British scientist Andrew Turner, who has been studying the contamination of household goods by those electronic flame retardants for years.
A few points about the Toxic-Free Future paper, which spurred all that news coverage. First, it’s based in part on a massive mathematical error. The paper calculates that users of “contaminated kitchen utensils” would have a median intake of BDE-209, one of the common flame retardants, of 34,700 nanograms per day. (A nanogram is a billionth of a gram.)
The paper states that this daily exposure “would approach” the reference dose set by the U.S. Environmental Protection Agency of 7,000 nanograms per kilogram of body weight per day, which the paper says pencils out at 42,000 nanograms per day for a 60-kilogram adult. Pretty good ground for concern, since the EPA uses the reference dose to measure the level of health risk from exposure to a toxin.
Except: 7,000 times 60 isn’t 42,000; it’s 420,000. The median intake for a 60-kilogram adult, in other words, isn’t anywhere close to the EPA’s reference dose.
Toxic-Free Future has issued a correction to its paper, acknowledging that the daily intake it calculated doesn’t “approach” the EPA reference dose but is one-tenth of the reference dose. (The Times has followed up with an article about the correction; several other publications that went to town on the black utensil threat have also done so.) But it also says “the calculation error does not affect the overall conclusion of the paper.”
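The arithmetic at issue takes only a few lines to check. A minimal sketch, using the figures cited above (the EPA reference dose, the paper’s median intake estimate and its assumed 60-kilogram adult):

```python
# Checking the Toxic-Free Future paper's arithmetic, using the numbers
# cited in the column: the EPA reference dose for BDE-209 and the paper's
# own median-intake estimate and assumed body weight.
REF_DOSE_NG_PER_KG_DAY = 7_000   # EPA reference dose: ng per kg of body weight per day
MEDIAN_INTAKE_NG_DAY = 34_700    # the paper's median daily intake estimate
BODY_WEIGHT_KG = 60              # the paper's assumed adult weight

# The safe daily total for a 60-kg adult: 420,000 ng/day, not the
# paper's claimed 42,000 -- the figure was off by a factor of 10.
daily_ref_dose = REF_DOSE_NG_PER_KG_DAY * BODY_WEIGHT_KG
print(daily_ref_dose)

# The estimated intake as a share of the reference dose: about 8%,
# i.e. roughly one-tenth, matching the correction.
fraction = MEDIAN_INTAKE_NG_DAY / daily_ref_dose
print(f"{fraction:.1%}")
```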
Megan Liu, the paper’s lead author, told me that it wasn’t really designed as a risk assessment, but chiefly as a study of how much of these contaminants has entered the consumer economy through kitchen utensils, children’s toys and other products. “Flame retardants shouldn’t even be in these products at all,” she says, which is true.
Yet the question for the average consumer is how dangerous these products really are. The answer: not very.
In a study cited by Liu’s paper, researchers found that some chemicals leached from a black spatula into cooking oil.
The Atlantic’s take on this was that the paper “found that flame retardants in black kitchen utensils readily migrate into hot cooking oil.” Not so readily, however: The researchers cut a black spatula into small pieces and basted them in 320-degree cooking oil for 15 minutes. Who does that? As epidemiologist Gideon Meyerowitz-Katz points out, “most people don’t leave their spatulas in the fryer and walk away for a quarter of an hour.”
There are other issues with this paper. One is that 60 kilograms, or about 132 pounds, isn’t the average weight of American adults. The U.S. Centers for Disease Control and Prevention places the average weight for an adult male at about 200 pounds, and for a female about 171.
Using those weights would have shown that the potential for health effects is even more remote than the overheated news coverage of the paper suggests. In any case, the evidence for long-term human health effects from the normal exposure to these chemicals is scanty. It comes almost entirely from experiments on lab mice and rats subjected to doses unlikely to occur in the real world, and to an experiment on human cells also in the laboratory.
Of course, if you’re inclined to eliminate all artifacts of modern commerce from your life, no one is stopping you. Liu and her colleagues observe that kitchen implements made from wood or stainless steel are widely available. They’ve also properly noted that among the real problems with the recycling of plastics in consumer goods is that we don’t know anything about how much goes into which products and where they’ve come from.
Some legislatures have moved toward requiring more disclosure, which is to the good. But if you spent the last few weeks or months doing a hard target search for black implements in your house, you probably didn’t have to.
Now on to the drones. When I first heard of New Jersey residents expressing panic over mysterious lights overhead, I flashed on the Firesign Theatre line, “Big light in sky slated to appear in East.” Except that the Firesign Theatre was a satire troupe of the 1960s and ’70s, the line originated in their parody of a post-apocalyptic news broadcast, and the game was given away by the title of their best album, “Don’t Crush that Dwarf, Hand Me the Pliers.” The current panic appears to be for real.
All the worrying got me thinking about the interview I conducted in September with Sean M. Kirkpatrick, who had recently retired as the Pentagon’s chief investigator of UFO reports. As he had written in a Scientific American op-ed, he and his team had been overwhelmed by a “whirlwind of tall tales, fabrication and secondhand or thirdhand retellings of the same,” producing “a social media frenzy and a significant amount of congressional and executive time and energy spent on investigating these so-called claims.”
Sound familiar?
The claims of an invasion of the Eastern seaboard by swarms of drones have every marker of a groundless social media frenzy. This started with some truly baroque partisan speculation; on Dec. 11, Rep. Jeff Van Drew (R-N.J.) cadged himself some airtime on Fox News by claiming that his home state was under attack from Iran.
“I’m going to tell you the real deal,” he said. “Iran launched a mother ship that contains these drones. It’s off the East Coast of the United States of America. They’ve launched drones.”
Three days later, New York Gov. Kathy Hochul, a Democrat, declared “this has gone too far,” grousing that mystery drones had closed down a metropolitan New York airport. The bare-bones reporting on this event might have made people think that JFK or LaGuardia had been attacked by mystery drones. In fact, the airport was Stewart Airport, which is 60 miles from Manhattan, is served mostly by the ultra-low-cost Allegiant Airlines with routes to Florida, and was closed for one hour.
My favorite performance was that of former Maryland Gov. Larry Hogan, a Republican, who reported via X that on Dec. 12 he “personally witnessed (and videoed) what appeared to be dozens of large drones in the sky above my residence … (25 miles from our nation’s capital). I observed the activity for approximately 45 minutes.”
It didn’t take long for Hogan to be inundated with responses from astronomers and meteorologists that what he had videotaped weren’t drones flying over his house, but the constellation Orion, which as meteorologist Matthew Cappucci informed him crisply, is “made up of stars between 244 and 1,344 light years away.”
Since then, neighborhood groups in New Jersey have organized “sky watches” to track the invading swarms and traded reports via their Ring doorbells. Donald Trump advised people to shoot the drones down, which is a good way to make things worse.
Some people conjecture that the drone hysteria is the product of the public’s mistrust of government. That doesn’t explain much, since a large share of the hysteria has been promoted by elected officials themselves. Politicians are naturally averse to calling their constituents idiots, so they have been responding by demanding more transparency from government officials at the Pentagon and other agencies. It’s always safe for politicians to assure voters that they’ll hold bureaucrats’ feet to the fire.
The problem here is that government agencies have been very clear about what’s happening overhead. The “drone” sightings, they say, are of commercial or U.S. military aircraft, helicopters, and perhaps drone flights by hobbyists wanting to get in on the fun. Most of it is surely the product of ignorance. How much more do we need federal agencies to explain?
“Most people don’t look at the sky,” notes Cheryl Rofer, a retired nuclear scientist. “They don’t know what airplanes look like up there, particularly at night, and they don’t know what the stars and planets look like. They can’t estimate distance — which is tricky in the sky — and they aren’t aware of how things can seem to move. They aren’t aware of how to check if those objects in fact are moving.”
There may be one other explanation for why there are so many purported drone sightings in New Jersey. As the blogger Kevin Drum writes, there are a lot of drones in New Jersey, in part because a state law “indemnifies drone fliers against lawsuits from New Jersey landowners for use of their property for drone overflights.”
So, sure. New Jersey loves drones, which nobody noticed until a local congressman decided to blame Iran.
That should cover the hysterias of the moment. Black spatulas won’t kill you, and the lights in the sky aren’t alien spaceships or Iranian bombers. Any questions?
Business
A new delivery bot is coming to L.A., built stronger to survive in these streets
The rolling robots that deliver groceries and hot meals across Los Angeles are getting an upgrade.
Coco Robotics, a UCLA-born startup that’s deployed more than 1,000 bots across the country, unveiled its next-generation machines on Thursday.
The new robots are bigger, tougher and better equipped for autonomy than their predecessors. The company will use them to expand into new markets and increase its presence in Los Angeles, where it makes deliveries through a partnership with DoorDash.
Dubbed Coco 2, the next-gen bots have upgraded cameras and front-facing lidar, a laser-based sensor used in self-driving cars. They will use hardware built by Nvidia, the Santa Clara-based artificial intelligence chip giant.
Coco co-founder and chief executive Zach Rash said Coco 2 will be able to make deliveries even in conditions unsafe for human drivers. The robot is fully submersible in case of flooding and is compatible with special snow tires.
Zach Rash, co-founder and CEO of Coco, opens the top of the new Coco 2 (Next-Gen) at the Coco Robotics headquarters in Venice.
(Kayla Bartkowski/Los Angeles Times)
Early this month, a cute Coco was recorded struggling through flooded roads in L.A.
“She’s doing her best!” said the person recording the video. “She is doing her best, you guys.”
Instagram followers cheered the bot on, with one posting, “Go coco, go,” and others calling for someone to help the robot.
“We want it to have a lot more reliability in the most extreme conditions where it’s either unsafe or uncomfortable for human drivers to be on the road,” Rash said. “Those are the exact times where everyone wants to order.”
The company will ramp up mass production of Coco 2 this summer, Rash said, aiming to produce 1,000 bots each month.
The design is sleek and simple, with a pink-and-white ombré paint job, the company’s name printed in lowercase, and a keypad for loading and unloading the cargo area. The robots have four wheels and a bigger internal compartment for carrying food and goods.
Many of the bots will be used for expansion into new markets across Europe and Asia, but they will also hit the streets in Los Angeles and operate alongside the older Coco bots.
Coco has about 300 bots in Los Angeles already, serving customers from Santa Monica and Venice to Westwood, Mid-City, West Hollywood, Hollywood, Echo Park, Silver Lake, downtown, Koreatown and the USC area.
The new Coco 2 (Next-Gen) drives along the sidewalk at the Coco Robotics headquarters in Venice.
(Kayla Bartkowski/Los Angeles Times)
The company is in discussion with officials in Culver City, Long Beach and Pasadena about bringing autonomous delivery to those communities.
There’s also been demand for the bots in Studio City, Burbank and the San Fernando Valley, according to Rash.
“A lot of the markets that we go into have been telling us they can’t hire enough people to do the deliveries and to continue to grow at the pace that customers want,” Rash said. “There’s quite a lot of area in Los Angeles that we can still cover.”
The bots already operate in Chicago, Miami and Helsinki, Finland. Last month, they arrived in Jersey City, N.J.
Late last year, Coco announced a partnership with DashMart, DoorDash’s delivery-only online store. The partnership allows Coco bots to deliver fresh groceries, electronics and household essentials as well as hot prepared meals.
With the release of Coco 2, the company is eyeing faster deliveries using bike lanes and road shoulders as opposed to just sidewalks, in cities where it’s safe to do so. Coco 2 can adapt more quickly to new environments and physical obstacles, the company said.
Zach Rash, co-founder and CEO of Coco.
(Kayla Bartkowski/Los Angeles Times)
Coco 2 is designed to operate autonomously, but there will still be human oversight in case the robot runs into trouble, Rash said. Damaged sidewalks or unexpected construction can stop a bot in its tracks.
The need for human supervision has created a new field of jobs for Angelenos.
Though there have been reports of pedestrians bullying the robots by knocking them over or blocking their path, Rash said the community response has been overall positive. The bots are meant to inspire affection.
“One of the design principles on the color and the name and a lot of the branding was to feel warm and friendly to people,” Rash said.
Coco plans to add thousands of bots to its fleet this year. The delivery service got its start as a dorm room project in 2020, when Rash was a student at UCLA. He co-founded the company with fellow student Brad Squicciarini.
The Santa Monica-based company has completed more than 500,000 zero-emission deliveries and its bots have collectively traveled around 1 million miles.
Coco chooses neighborhoods to deploy its bots based on density, prioritizing areas with restaurants clustered together and short delivery distances as well as places where parking is difficult.
The robots can relieve congestion by taking cars and motorbikes off the roads. Rash said there is so much demand for delivery services that the company’s bots are not taking jobs from human drivers.
Instead, Coco can fill gaps in the delivery market while saving merchants money and improving the safety of city streets.
“This vehicle is inherently a lot safer for communities than a car,” Rash said. “We believe our vehicles can operate the highest quality of service and we can do it at the lowest price point.”
Business
Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon
President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.
In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.
The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.
Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.
The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.
Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.
“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.
Anthropic didn’t immediately respond to a request for comment.
Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”
The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.
On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.
The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes, and that the safeguards Anthropic wants are already covered by the law.
Still, Amodei was worried about Washington’s commitment.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
Tech workers have backed Anthropic’s stance.
Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they’re urging their employers to reject similar demands should those companies hold contracts with the Pentagon of their own.
“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.
Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.
Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.
“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”
Anthropic has distinguished itself from its rivals by touting its concern about AI safety.
The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.
The company has roughly 2,000 employees and annualized revenue of about $14 billion.
Business
Video: The Web of Companies Owned by Elon Musk
By Kirsten Grind, Melanie Bencosme, James Surdam and Sean Havey
February 27, 2026