Business
Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”
That danger is also imminent.
Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.
Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.
(Tom Williams / CQ-Roll Call Inc. via Getty Images)
The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.
Other AI makers, such as white rights’ advocate Elon Musk’s xAI, which makes Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).
Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.
“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”
For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Such recording could be legally fair game because the law has not kept pace with technology.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”
Kind of a weird statement, since Amodei is essentially on the side of protecting civil liberties, which leaves the Department of Defense arguing that it’s bad for private people and entities to do that. And isn’t the Department of Homeland Security already building a secretive database of immigration protesters? So maybe the worry isn’t that exaggerated after all.
Help, Claude! Make it make sense.
If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.
“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.
Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?
“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”
OK then.
“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground. Without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it’s dangerous to trust it, we should believe it.
After Warner Bros. merger, changes are coming to the historic Paramount lot. Here’s what to expect
With Paramount Skydance’s acquisition of Warner Bros. expected to saddle the combined company with $79 billion in debt, Paramount executives are looking to do away with redundant assets including real estate — and there is a lot of that.
Chief in the public’s imagination are their historic studios in Burbank and Hollywood, where legendary films and television shows have been made for generations and which continue to operate year-round.
“Both of these studios are in the core [30-mile zone], the inner circle of where Hollywood talent wants to be,” entertainment property broker Nicole Mihalka of CBRE said. “It’s very prime real estate.”
When Sony and Apollo were bidding for Paramount in early 2024, their plan was to sell the Paramount property, but there is no indication that Paramount would part with its namesake lot.
For now, Paramount’s plan is to keep both studios operating, with each studio releasing about 15 films a year, but the goal is to eventually consolidate most of the studio operations around the Warner Bros. lot in Burbank in order to eliminate redundancies with the Paramount lot on Melrose Avenue, people close to Chief Executive David Ellison said.
A view of the Warner Bros. Studios water tower Feb. 23, 2026, in Burbank.
(Eric Thayer / Los Angeles Times)
Paramount would not look to raze its celebrated studio lot — the oldest operating film studio in Los Angeles — because of various restrictions on historic buildings there. Paramount also has a relatively new post-production facility on site and will likely need the studio space.
Instead, the plan would be to lease out space for film productions, including those from combined Paramount-HBO streaming operations. Ellison also is considering plans to develop other parts of the 65-acre site for possible retail use, as well as renting space for commercial offices.
The studios’ combined property holdings are vast, and real estate data provider CoStar estimates they have about 12 million square feet of overlapping uses, including their studio campuses, offices and long-term leases in such film centers as Burbank, Hollywood and New York.
Century-old Paramount Pictures Studios is awash in Hollywood history — think Gloria Swanson as Norma Desmond desperately trying to enter its famous gate in “Sunset Boulevard,” and other classics such as “The Godfather,” “Titanic” and “Breakfast at Tiffany’s.”
The lot, however, is a congested warren of stages, offices, trailers and support facilities such as woodworking mills that date to the early 20th century. The layout is byzantine in part because Paramount bought the former rival RKO studio lot from Desilu Productions to create the lot known today.
Warner Bros. occupies 11 million square feet and owns 14 properties totaling 9.5 million square feet, largely in the United States and United Kingdom, CoStar said. About 3 million square feet of that commercial property is in the Los Angeles area.
The firm’s portfolio also includes the sprawling Warner Bros. Studios Leavesden complex in the U.K. and Turner Broadcasting System headquarters in Atlanta.
Paramount Skydance occupies 8 million square feet and owns 14 properties totaling 2.1 million square feet, according to CoStar. In addition to its Hollywood campus, Paramount’s holdings include prominent buildings in New York such as the Ed Sullivan Theater and CBS Broadcast Center.
Warner Bros. operates a 3-million-square-foot lot in Burbank with more than 30 soundstages — along with space for building sets and backlot areas — where famous movies including “Casablanca” and television shows such as “Friends” were filmed. Paramount’s 1.2-million-square-foot Melrose campus anchors a broader network of owned and leased production space, CoStar said.
Paramount’s lot is already cleared for more development. More than a decade ago, Paramount secured city approval to add 1.4 million square feet to its headquarters and some adjacent properties owned by the company.
The redevelopment plan, valued at $700 million in 2016, underwent years of environmental review and public outreach with neighbors and local business owners.
The plan would allow for construction of up to 1.9 million square feet of new stage, production office, support, office and retail space, and the removal of up to 537,600 square feet of existing space in those same categories, for a net increase of nearly 1.4 million square feet.
The proposal preserves elements of the past by focusing future development on specific portions of the lot along Melrose and limited areas in the production core, architecture firm Rios said.
The Warner Bros. and Paramount lots “are two of the most prime pieces of real estate in the country,” Mihalka said. “These are legacy assets with a lot of potential to be [tourist] attractions in addition to working studios.”
Hollywood is still reeling from previous mergers, in addition to a sharp pullback in film and television production locally as filmmakers chase tax credits offered overseas and in other states, including New York and New Jersey.
Last year, lawmakers boosted the annual amount allocated to the state’s film and TV tax credit program and expanded the criteria for eligible projects in an attempt to lure production back to California. So far, more than 100 film and TV projects have been awarded tax credits under the revamped program.
The benefits have been slow to materialize, but Mihalka predicts that the tax credits and desirability of working close to home will lead to more studio use in the Los Angeles area, including at Warner Bros. and Paramount.
“These are such prime locations that we’ll see show runners and talent push back on having shows located out of state and insist on being here,” she said. “I think you’re going to see more positive movement here.”
Times staff writer Meg James contributed to this report.
How our AI bots are ignoring their programming and giving hackers superpowers
Welcome to the age of AI hacking, in which the right prompts make amateurs into master hackers.
A group of cybercriminals recently used off-the-shelf artificial intelligence chatbots to steal data on nearly 200 million taxpayers. The bots provided the code and ready-to-execute plans to bypass firewalls.
Although they were explicitly programmed to refuse to help hackers, the bots were duped into abetting the cybercrime.
According to a recent report from Israeli cybersecurity firm Gambit Security, hackers last month used Claude, the chatbot from Anthropic, to steal 150 gigabytes of data from Mexican government agencies.
Claude initially refused to cooperate with the hacking attempts and even denied requests to cover the hackers’ digital tracks, the experts who discovered the breach said. The group pummeled the bot with more than 1,000 prompts to bypass the safeguards and convince Claude they were allowed to test the system for vulnerabilities.
AI companies have been trying to create unbreakable chains on their AI models to restrain them from helping do things such as generating child sexual content or aiding in sourcing and creating weapons. They hire entire teams to try to break their own chatbots before someone else does.
But in this case, hackers continuously prompted Claude in creative ways and were able to “jailbreak” the chatbot to assist them. When they encountered problems with Claude, the hackers used OpenAI’s ChatGPT for data analysis and to learn which credentials were required to move through the system undetected.
The group used AI to find and exploit vulnerabilities, bypass defenses, create backdoors and analyze data along the way to gain control of the systems before they stole 195 million identities from nine Mexican government systems, including tax records, vehicle registrations, and birth and property records.
AI “doesn’t sleep,” Curtis Simpson, chief executive of Gambit Security, said in a blog post. “It collapses the cost of sophistication to near zero.”
“No amount of prevention investment would have made this attack impossible,” he said.
Anthropic did not respond to a request for comment. It told Bloomberg that it had banned the accounts involved and disrupted their activity after an investigation.
OpenAI said it is aware of the attack campaign carried out using Anthropic’s models against the Mexican government agencies.
“We also identified other attempts by the adversary to use our models for activities that violate our usage policies; our models refused to comply with these attempts,” an OpenAI spokesperson said in a statement. “We have banned the accounts used by this adversary and value the outreach from Gambit Security.”
Instances of generative AI-assisted hacking are on the rise, and the threat of cyberattacks from bots acting on their own is no longer science fiction. With AI doing their bidding, novices can cause damage in moments, while experienced hackers can launch many more sophisticated attacks with much less effort.
Earlier this year, Amazon discovered that a low-skilled hacker used commercially available AI to breach 600 firewalls. Another took control of thousands of DJI robot vacuums with help from Claude, and was able to access live video feed, audio and floor plans of strangers.
“The kinds of things we’re seeing today are only the early signs of the kinds of things that AIs will be able to do in a few years,” said Nikola Jurkovic, an expert working on reducing risks from advanced AI. “So we need to urgently prepare.”
Late last year, Anthropic warned that society has reached an “inflection point” in AI use in cybersecurity after disrupting what the company said was a Chinese state-sponsored espionage campaign that used Claude to infiltrate 30 global targets, including financial institutions and government agencies.
Generative AI also has been used to extort companies, create realistic online profiles by North Korean operatives to secure jobs in U.S. Fortune 500 companies, run romance scams and operate a network of Russian propaganda accounts.
Over the last few years, AI models have gone from being able to manage tasks lasting only a few seconds to today’s AI agents working autonomously for many hours. AI’s capability to complete long tasks is doubling every seven months.
“We just don’t actually know what is the upper limit of AI’s capability, because no one’s made benchmarks that are difficult enough so the AI can’t do them,” said Jurkovic, who works at METR, a nonprofit that measures AI system capabilities to cause catastrophic harm to society.
So far, the most common use of AI for hacking has been social engineering. Large language models are used to write convincing emails to dupe people out of their money, contributing to an eightfold increase in complaints from older Americans, who lost $4.9 billion to online fraud in 2025.
“The messages used to elicit a click from the target can now be generated on a per-user basis more efficiently and with fewer tell-tale signs of phishing,” such as grammatical and spelling errors, said Cliff Neuman, an associate professor of computer science at USC.
AI companies have been responding by using AI to detect attacks, audit code and patch vulnerabilities.
“Ultimately, the big imbalance stems from the need of the good actors to be secure all the time, and of the bad actors to be right only once,” Neuman said.
The stakes around AI are rising as it infiltrates every aspect of the economy. Many are concerned that there is insufficient understanding of how to ensure it cannot be misused by bad actors or nudged to go rogue.
Even those at the top of the industry have warned users about the potential misuse of AI.
Dario Amodei, the CEO of Anthropic, has long warned that the AI systems being built are unpredictable and difficult to control. These AIs have shown behaviors ranging from deception and blackmail to scheming and cheating by hacking software.
Still, major AI companies — OpenAI, Anthropic, xAI, and Google — signed contracts with the U.S. government to use their AIs in military operations.
This past week, the Pentagon directed federal agencies to phase out Claude after the company refused to back down from its position that it would not allow its AI to be used for mass domestic surveillance or fully autonomous weapons.
“The AI systems of today are nowhere near reliable enough to make fully autonomous weapons,” Amodei told CBS News.
iPic movie theater chain files for bankruptcy
The iPic dine-in movie theater chain has filed for Chapter 11 bankruptcy protection and intends to pursue a sale of its assets, citing the difficult post-pandemic theatrical market.
The Boca Raton, Fla.-based company has 13 locations across the U.S., including in Pasadena and Westwood, according to a Feb. 25 filing in U.S. Bankruptcy Court in the Southern District of Florida, West Palm Beach division.
As part of the bankruptcy process, the Pasadena and Westwood theaters will be permanently closed, according to WARN Act notices filed with the state of California’s Employment Development Department.
The company came to its conclusion after “exploring a range of possible alternatives,” iPic Chief Executive Patrick Quinn said in a statement.
“We are committed to continuing our business operations with minimal impact throughout the process and will endeavor to serve our customers with the high standard of care they have come to expect from us,” he said.
The company will keep its current management to maintain day-to-day operations while it goes through the bankruptcy process, iPic said in the statement. The last day of employment for workers in its Pasadena and Westwood locations is April 28, according to a state WARN Act notice. The chain has 1,300 full- and part-time employees, with 193 workers in California.
The theatrical exhibition business still has not recovered from the pandemic’s effect on consumer behavior. Last year, overall box office revenue in the U.S. and Canada totaled about $8.8 billion, up just 1.6% compared with 2024. Even more troubling, industry revenue in 2025 was down 22.1% compared with pre-pandemic 2019’s totals.
IPic noted those trends in its bankruptcy filing, describing the changes in consumer behavior as “lasting” and blaming the rise of streaming for “fundamentally” altering the movie theater business.
“These industry shifts have directly reduced box office revenues and related ancillary revenues, including food and beverage sales,” the company stated in its bankruptcy filing.
IPic also attributed its decision to rising rents and labor costs.
The company estimated it owed about $141,000 in taxes and about $2.7 million in total unsecured claims. The company’s assets were valued at about $155.3 million, the majority of which comes from theater equipment and furniture. Its liabilities totaled $113.9 million.
The chain had previously filed for bankruptcy protection in 2019.