Business
Column: Molly White's message for journalists going freelance — be ready for the pitfalls
Molly White is the model of an indefatigable and intrepid journalist. Through her website Web3 is Going Just Great and newsletter Citation Needed, she keeps tabs on the hacks, scams, failures, hype and assorted legal difficulties swirling about the cryptocurrency world.
She’s also independent, which means she’s unprotected by the fortification of lawyers and resources erected by the owners of newspapers such as The Times to fend off legal threats, frivolous and otherwise, that are part of the arsenal of people and firms we write about.
So she has some advice for journalists tempted to escape the burden of having bosses by "just going independent," enticed, say, by the siren call of freelancing: "Just do a substack! It's the future of journalism."
I am the legal team. I am the fact-checking department. I am the editorial staff. I am the one responsible for triple-checking every single statement I make in the type of original reporting that I know carries a serious risk of baseless but ruinously expensive litigation regularly used to silence journalists, critics, and whistleblowers.
— Molly White
White’s warning is, in a nutshell: “It’s not for everyone.”
Anyone who follows crypto scams is familiar with White’s work. A software engineer by training, she is a longtime Wikipedia editor who got interested in the dark underbelly of crypto when she tried to write a Wikipedia article about it.
She doesn’t find much if anything to like about the field, which she sees as a hive of people aiming to take advantage of the innocent and unwary — the facetious subtitle of her Web3 website calls it “definitely not an enormous grift that’s pouring lighter fluid on our already smoldering planet.”
But she does it all by herself.
“As an independent writer and publisher,” White wrote recently, “I am the legal team. I am the fact-checking department. I am the editorial staff. I am the one responsible for triple-checking every single statement I make in the type of original reporting that I know carries a serious risk of baseless but ruinously expensive litigation regularly used to silence journalists, critics, and whistleblowers…. I am the one who ultimately could be financially ruined by such a lawsuit. I am the one in charge of weighing whether I should spring for the type of insurance that is standard fare for big outlets to protect themselves and their staff, but often prohibitively expensive for independent writers.”
In recent weeks, White has had to fend off a couple of fatuous legal threats stemming from her work — one from a putative lawyer demanding that she take down a post for infringing a copyright under the Digital Millennium Copyright Act (it wasn’t an infringement), and some sinister legalistic-sounding noise from the crypto platform Coinbase. We’ll return to both in a moment.
Experts in the potholes and pitfalls facing writers — especially investigative or activist-minded journalists — say they've received a rising number of inquiries from those considering launching a freelance career. Lloyd Jassin, a New York lawyer specializing in publishing law — including copyright and libel law, among other issues important to independent writers — says that in the last few months he's referred several clients to brokers who represent insurance firms for writers.
Curiosity about the freelance life is rising for several reasons. Mass layoffs in the media industry have put thousands of journalists on the street, forcing them to ponder new ways to exercise their professional skills.
Substack and other such platforms purport to offer writers a way to acquire followers of their own, building their personal brands. And the performance of established news media in the recent election, including the decision of the owners of The Times and the Washington Post not to endorse a presidential candidate, may have inspired established staffers to consider an exit from corporate media.
Independent writers' work is protected, at least in theory, by U.S. libel laws, which discourage defamation lawsuits by public figures, and by so-called anti-SLAPP laws, which discourage "strategic lawsuits against public participation" — that is, lawsuits designed chiefly to intimidate or silence critics. But exercising one's rights under those laws can require hiring a lawyer, sometimes at considerable expense. Plaintiffs deemed to have filed a SLAPP lawsuit can be required to cover the defendant's legal costs, but only after the defendant prevails on a motion in court.
White is no stranger to efforts to intimidate her. The most concentrated pushback she has received recently has come from Coinbase. The crypto platform is irked at White’s reporting that it may have violated federal law by making political contributions while negotiating for and subsequently holding a federal contract.
In conjunction with the watchdog group Public Citizen, White filed a formal complaint against Coinbase with the Federal Election Commission on Aug. 1. In her reporting, White has shown that some of its contributions to the crypto industry super PAC Fairshake were made within the period in which political contributions are barred, which extends from the start of a contributor’s contract negotiations through the completion of the contract. The U.S. Marshals Service awarded Coinbase the $7-million, one-year contract to help manage the government’s hoard of seized crypto assets in July.
Coinbase hasn’t responded directly to White. Its response to the accusation has come through a series of tweets by its chief legal officer, Paul Grewal.
The gist of Grewal’s argument is that the funding for Coinbase’s contract comes from seized crypto assets in the Justice Department’s Assets Forfeiture Fund, not from congressional appropriations. Therefore, he contends, Coinbase didn’t violate the law prohibiting political contributions by contractors paid from “funds appropriated by the Congress.”
“Seized crypto assets are not Congressionally appropriated funds, period,” Grewal wrote.
As it happens, the legal question is far from being so cut and dried. In fact, the definition of “appropriated” was settled conclusively by the Supreme Court, in a 7-2 decision handed down in May and written by Justice Clarence Thomas. The only dissenters were justices Samuel A. Alito Jr. and Neil M. Gorsuch.
In that case, the justices turned away a challenge to the funding of the Consumer Financial Protection Bureau, which derives from the Federal Reserve System. (The plaintiffs made an elaborately legalistic argument that such funding violates the “appropriations clause” of the Constitution and therefore the CFPB is unconstitutional.)
Thomas wrote that the plaintiffs had offered “no defensible argument” that the appropriations clause requires more than a congressional law authorizing “the disbursement of specified funds for identified purposes,” as was the funding for the CFPB.
By extension, so is the funding for the Coinbase contract. Indeed, the Congressional Research Service, in a close examination of the Assets Forfeiture Fund in 2015, found that for most purposes, the fund was the beneficiary of “a permanent appropriation” by Congress.
Grewal went further. Noting that he had placed his interpretation of the law on the record, he wrote that “repeating misrepresentations of facts after previously being put on notice is …. unwise.”
That sinister ellipsis is Grewal’s.
Grewal told me by email that no legal threat was implied by his tweet, and that Coinbase “certainly would make plain if it were our intent” to progress to a lawsuit.
Still, White interpreted Grewal’s tweet as “certainly a threat of something. I don’t think Coinbase is going to come and break my kneecaps, so a legal threat is the most obvious interpretation. It seems like a pretty clear threat to stop writing about this, or else.”
Public Citizen is sanguine about Coinbase’s swaggering. “Whenever corporate misconduct is pointed out, they always say ‘We didn’t really break the law, or the law doesn’t apply to us the way you think it does,’” says Rick Claypool, a research director at Public Citizen who co-filed the complaint with White. “It would be surprising if they said, ‘Oh, yeah, you’re right, whoops.’ Going up against a Goliath, they have a lot of strength to squish the Davids coming after them.”
Separately, White fielded a “takedown” notice from supposed representatives of Roman Ziemian, a co-founder of the alleged crypto pyramid scheme FutureNet. In an Aug. 19 post on Web3 is Going Just Great, White posted news reports that Ziemian had been arrested in Montenegro, and that he faces international warrants from authorities in Poland and South Korea.
The representatives offered her $500 to take down the post. When she refused, they copied the post to a blogging website, backdated the copy, and claimed she had plagiarized it, manufacturing a pretext for a copyright-infringement claim. She posted the notice, which came from a purported lawyer named Michael Woods with a Los Angeles address that doesn't exist in Postal Service records. He didn't respond to a message I left at the telephone number he listed.
How can independent journalists keep intimidation efforts like these at arm’s length? The goal of those threatening legal action, no matter how frivolous, is “to suppress criticism,” Jassin says. “Being a good journalist is the first defense,” he adds, so getting the facts right is indispensable.
White doesn’t keep a lawyer on retainer, but she knows lawyers who are “willing to glance at something I’ve received in my email inbox and reach out to offer support should one of those threats escalate into something more tangible” — which hasn’t yet happened.
“In a perfect world, reporting the facts would be enough to avoid frivolous lawsuits,” she told me. “But obviously, companies and people with resources are willing to file frivolous lawsuits regardless. That is a risk I take on, with hopes that being cautious and being very careful about fact-checking will at least stave off the worst.”
She advises journalists thinking about going independent to “think through if it would be life-altering to be on the risky end of an actual lawsuit.” There are ways, she notes, to “structure your business so you’re not risking your personal assets,” including finding insurance to cover one’s legal defense.
"Legal threats are only one component" of life as a freelancer, White says. "There are a lot of other challenges — you don't have employer-sponsored healthcare, or a 401k. A lot of readers think it's an easy decision to quit a job and go independent. But despite all the challenges, I really love being independent."
Business
A new delivery bot is coming to L.A., built stronger to survive in these streets
The rolling robots that deliver groceries and hot meals across Los Angeles are getting an upgrade.
Coco Robotics, a UCLA-born startup that’s deployed more than 1,000 bots across the country, unveiled its next-generation machines on Thursday.
The new robots are bigger, tougher and better equipped for autonomy than their predecessors. The company will use them to expand into new markets and increase its presence in Los Angeles, where it makes deliveries through a partnership with DoorDash.
Dubbed Coco 2, the next-gen bots have upgraded cameras and front-facing lidar, a laser-based sensor used in self-driving cars. They will use hardware built by Nvidia, the Santa Clara-based artificial intelligence chip giant.
Coco co-founder and chief executive Zach Rash said Coco 2 will be able to make deliveries even in conditions unsafe for human drivers. The robot is fully submersible in case of flooding and is compatible with special snow tires.
Zach Rash, co-founder and CEO of Coco, opens the top of the new Coco 2 (Next-Gen) at the Coco Robotics headquarters in Venice.
(Kayla Bartkowski/Los Angeles Times)
Early this month, a cute Coco was recorded struggling through flooded roads in L.A.
“She’s doing her best!” said the person recording the video. “She is doing her best, you guys.”
Instagram followers cheered the bot on, with one posting, “Go coco, go,” and others calling for someone to help the robot.
“We want it to have a lot more reliability in the most extreme conditions where it’s either unsafe or uncomfortable for human drivers to be on the road,” Rash said. “Those are the exact times where everyone wants to order.”
The company will ramp up mass production of Coco 2 this summer, Rash said, aiming to produce 1,000 bots each month.
The design is sleek and simple, with a pink-and-white ombré paint job, the company's name printed in lowercase, and a keypad for loading and unloading the cargo area. The robots have four wheels and a bigger internal compartment for carrying food and goods.
Many of the bots will be used for expansion into new markets across Europe and Asia, but they will also hit the streets in Los Angeles and operate alongside the older Coco bots.
Coco has about 300 bots in Los Angeles already, serving customers from Santa Monica and Venice to Westwood, Mid-City, West Hollywood, Hollywood, Echo Park, Silver Lake, downtown, Koreatown and the USC area.
The new Coco 2 (Next-Gen) drives along the sidewalk at the Coco Robotics headquarters in Venice.
(Kayla Bartkowski/Los Angeles Times)
The company is in discussion with officials in Culver City, Long Beach and Pasadena about bringing autonomous delivery to those communities.
There’s also been demand for the bots in Studio City, Burbank and the San Fernando Valley, according to Rash.
“A lot of the markets that we go into have been telling us they can’t hire enough people to do the deliveries and to continue to grow at the pace that customers want,” Rash said. “There’s quite a lot of area in Los Angeles that we can still cover.”
The bots already operate in Chicago, Miami and Helsinki, Finland. Last month, they arrived in Jersey City, N.J.
Late last year, Coco announced a partnership with DashMart, DoorDash’s delivery-only online store. The partnership allows Coco bots to deliver fresh groceries, electronics and household essentials as well as hot prepared meals.
With the release of Coco 2, the company is eyeing faster deliveries using bike lanes and road shoulders as opposed to just sidewalks, in cities where it’s safe to do so. Coco 2 can adapt more quickly to new environments and physical obstacles, the company said.
Zach Rash, co-founder and CEO of Coco.
(Kayla Bartkowski/Los Angeles Times)
Coco 2 is designed to operate autonomously, but there will still be human oversight in case the robot runs into trouble, Rash said. Damaged sidewalks or unexpected construction can stop a bot in its tracks.
The need for human supervision has created a new field of jobs for Angelenos.
Though there have been reports of pedestrians bullying the robots by knocking them over or blocking their path, Rash said the community response has been overall positive. The bots are meant to inspire affection.
“One of the design principles on the color and the name and a lot of the branding was to feel warm and friendly to people,” Rash said.
Coco plans to add thousands of bots to its fleet this year. The delivery service got its start as a dorm room project in 2020, when Rash was a student at UCLA. He co-founded the company with fellow student Brad Squicciarini.
The Santa Monica-based company has completed more than 500,000 zero-emission deliveries and its bots have collectively traveled around 1 million miles.
Coco chooses neighborhoods to deploy its bots based on density, prioritizing areas with restaurants clustered together and short delivery distances as well as places where parking is difficult.
Rash said the robots can relieve congestion by taking cars and motorbikes off the roads, and that demand for delivery services is so high that the company's bots are not taking jobs from human drivers. Instead, he said, Coco can fill gaps in the delivery market while saving merchants money and improving the safety of city streets.
“This vehicle is inherently a lot safer for communities than a car,” Rash said. “We believe our vehicles can operate the highest quality of service and we can do it at the lowest price point.”
Business
Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon
President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.
In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.
The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.
Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.
The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.
Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.
“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.
Anthropic didn’t immediately respond to a request for comment.
Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”
The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.
On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.
The government has emphasized in negotiations that it wants to use Anthropic's technology only for legal purposes, and that the safeguards Anthropic wants are already covered by existing law.
Still, Amodei was worried about Washington’s commitment.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
Tech workers have backed Anthropic’s stance.
Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they’re urging their employers to reject these demands as well if they have additional contracts with the Pentagon.
“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.
Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.
Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.
“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”
Anthropic has distinguished itself from its rivals by touting its concern about AI safety.
The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.
The company has roughly 2,000 employees and annualized revenue of about $14 billion.
Business
Video: The Web of Companies Owned by Elon Musk
By Kirsten Grind, Melanie Bencosme, James Surdam and Sean Havey
February 27, 2026