Business
One of California’s first labor fights over AI is playing out at Kaiser
Workers of one of the most powerful unions in California are forming an early front in the battle against artificial intelligence, warning it could take jobs and harm people’s health.
As part of their negotiations with their employer, Kaiser Permanente workers have been pushing back against the giant healthcare provider’s use of AI. They are building demands around the issue and others, using picket lines and hunger strikes to help persuade Kaiser to use the powerful technology responsibly.
Kaiser says AI could save employees from tedious, time-consuming tasks such as taking notes and paperwork. Workers say that could be the first step down a slippery slope that leads to layoffs and damage to patient health.
“They’re sort of painting a map that would reduce their need for human workers and human clinicians,” said Ilana Marcucci-Morris, a licensed clinical social worker and part of the bargaining team for the National Union of Healthcare Workers, which is fighting for more protections against AI.
The 42-year-old Oakland-based therapist says she knows technology can be useful but warns that the consequences for patients have been “grave” when AI makes mistakes.
Kaiser says AI can help physicians and employees focus on serving members and patients.
“AI does not replace human assessment and care,” Kaiser spokesperson Candice Lee said in an email. “Artificial intelligence holds significant potential to benefit healthcare by supporting better diagnostics, enhancing patient-clinician relationships, optimizing clinicians’ time, and ensuring fairness in care experiences and health outcomes by addressing individual needs.”
AI fears are shaking up industries across the country.
Medical administrative assistants are among the most exposed to AI, according to a recent study by Brookings and the Centre for the Governance of AI. The assistants do the type of work that AI is getting better at. Meanwhile, they are less likely to have the skills or support needed to transition to new jobs, the study said.
There are millions of other jobs that are among the most vulnerable to AI, such as office clerks, insurance sales agents and translators, according to the research released last month.
In California, labor unions this week urged Gov. Gavin Newsom and lawmakers to pass more legislation to protect workers from AI. The California Federation of Labor Unions has sponsored a package of bills to address AI’s risks, including job loss and surveillance.
The technology “threatens to eviscerate workers’ rights and cause widespread job loss,” the group said in a joint letter with AFL-CIO leaders in different states.
Kaiser Permanente is California’s largest private employer, with close to 19,000 physicians and more than 180,000 employees. It has a major presence in Washington, Colorado, Georgia, Hawaii and other states.
The National Union of Healthcare Workers, which represents Kaiser employees, has been among the earliest to recognize and respond to the encroachment of AI into the workplace. As it has negotiated for better pay and working conditions, the use of AI has also become an important new point of discussion between workers and management.
Kaiser already uses AI software to transcribe conversations between healthcare workers and patients and take notes, but therapists have privacy concerns about recording highly sensitive remarks. The company also uses AI to predict when hospitalized patients might become more ill. It offers mental health apps for enrollees, including at least one with an AI chatbot.
Last year, Kaiser mental health workers held a hunger strike in Los Angeles to demand the healthcare provider improve its mental health services and patient care.
The union ratified a new contract covering 2,400 mental health and addiction medicine employees in Southern California last year, but negotiations continue for Marcucci-Morris and other Northern California mental health workers. They want Kaiser to pledge that AI will be used only to assist, but not replace, workers.
Kaiser said it’s still bargaining with the union.
“We don’t know what the future holds, but our proposal would commit us to bargain if there are changes to working conditions due to any new AI technologies,” Lee said.
Healthcare providers have also faced lawsuits over the use of AI tools to record conversations between doctors and patients. A November lawsuit, filed in San Diego County Superior Court, alleged Sharp HealthCare used an AI note-taking software called Abridge to illegally record doctor-patient conversations without consent.
Sharp HealthCare said it protects patients’ privacy and does not use AI tools during therapy sessions.
Some Kaiser doctors and clinicians, including therapists, use Abridge to take notes during patient visits. Kaiser Permanente Ventures, its venture capital arm, has invested in Abridge.
The healthcare provider said, “Investment decisions are distinctly separate from other decisions made by Kaiser Permanente.”
Close to half of Kaiser behavioral health professionals in Northern California said they are uncomfortable with the introduction of AI tools, including Abridge, in their clinical practice, according to their union.
The provider said that its workers review the AI-generated notes for accuracy and get patient consent, and that the recordings and transcripts are encrypted. Data are “stored and processed in approved, compliant environments for up to 14 days before becoming permanently deleted.”
Lawmakers and mental health professionals are exploring other ways to restrict the use of AI in mental healthcare.
The California Psychological Assn. is trying to push through legislation to protect patients from AI. It joined others to back a bill requiring clear, written consent before a client’s therapy session is recorded or transcribed.
The bill also prohibits individuals or companies, including those using AI, from offering therapy in California without a licensed professional.
State Sen. Steve Padilla (D-Chula Vista), who introduced the bill, said there need to be more rules around the use of AI.
“This technology is powerful. It’s ubiquitous. It’s evolving quickly,” he said. “That means you have a limited window to make sure we get in there and put the right guardrails in place.”
Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, said that people are using AI chatbots for advice on how to approach difficult conversations, not necessarily to replace therapy, but that more research is still needed.
He’s working with the National Alliance on Mental Illness to develop benchmarks so people understand how different AI tools respond to mental health.
Healthcare workers say they are worried about what they are already seeing happen when people struggling with mental health issues interact too much with AI chatbots.
AI chatbots such as OpenAI’s ChatGPT aren’t licensed or designed to be therapists and can’t replace professional mental healthcare. Still, some teenagers and adults have been turning to chatbots to share their personal struggles. People have long been using Google to deal with physical and mental health issues, but AI can seem more powerful because it delivers what looks like a diagnosis and a solution with confidence in a conversation.
Parents whose children died by suicide after talking to chatbots have sued California AI companies Character.AI and OpenAI, alleging the platforms provided content that harmed the mental health of young people and discussed suicide methods.
“They are not trained to respond as a human would respond,” said Dr. Dustin Weissman, president of the California Psychological Assn. “A lot of those nuances can fall through the cracks, and because of that, it could lead to catastrophic outcomes.”
To be sure, some users are finding value and even what feels like companionship in conversations with chatbots about their mental health and other issues.
Indeed, some say the AI bots have given them easier access to mental health tips and help them work through thoughts and feelings in a conversational style that might otherwise require an appointment with a therapist and hundreds of dollars.
Roughly 12% of adults are likely to use AI chatbots for mental healthcare in the next six months and 1% already do, according to a NAMI/Ipsos survey conducted in November.
But for mental health workers like Marcucci-Morris, AI by itself is not enough.
“AI is not the savior,” she said.
After Warner Bros. merger, changes are coming to the historic Paramount lot. Here’s what to expect
With Paramount Skydance’s acquisition of Warner Bros. expected to saddle the combined company with $79 billion in debt, Paramount executives are looking to do away with redundant assets including real estate — and there is a lot of that.
Chief in the public’s imagination are their historic studios in Burbank and Hollywood, where legendary films and television shows have been made for generations and continue to be made year-round.
“Both of these studios are in the core [30-mile zone], the inner circle of where Hollywood talent wants to be,” entertainment property broker Nicole Mihalka of CBRE said. “It’s very prime real estate.”
When Sony and Apollo were bidding for Paramount in early 2024, their plan was to sell the Paramount property, but there is no indication that Paramount would part with its namesake lot.
For now, Paramount’s plan is to keep both studios operating, with each releasing about 15 films a year, but the goal is to eventually consolidate most studio operations around the Warner Bros. lot in Burbank in order to eliminate redundancies with the Paramount lot on Melrose Avenue, people close to Chief Executive David Ellison said.
[Photo: A view of the Warner Bros. Studios water tower Feb. 23, 2026, in Burbank. (Eric Thayer / Los Angeles Times)]
Paramount would not look to raze its celebrated studio lot — the oldest operating film studio in Los Angeles — because of various restrictions on historic buildings there. Paramount also has a relatively new post-production facility on site and will likely need the studio space.
Instead, the plan would be to lease out space for film productions, including those from combined Paramount-HBO streaming operations. Ellison also is considering plans to develop other parts of the 65-acre site for possible retail use, as well as renting space for commercial offices.
The studios’ combined property holdings are vast, and real estate data provider CoStar estimates they have about 12 million square feet of overlapping uses, including their studio campuses, offices and long-term leases in such film centers as Burbank, Hollywood and New York.
Century-old Paramount Pictures Studios is awash in Hollywood history — think Gloria Swanson as Norma Desmond desperately trying to enter its famous gate in “Sunset Boulevard,” and other classics such as “The Godfather,” “Titanic” and “Breakfast at Tiffany’s.”
The lot, however, is a congested warren of stages, offices, trailers and support facilities such as woodworking mills that date to the early 20th century. The layout is byzantine in part because Paramount bought the former rival RKO studio lot from Desilu Productions to create the lot known today.
Warner Bros. occupies 11 million square feet and owns 14 properties totaling 9.5 million square feet, largely in the United States and United Kingdom, CoStar said. About 3 million square feet of that commercial property is in the Los Angeles area.
The firm’s portfolio also includes the sprawling Warner Bros. Studios Leavesden complex in the U.K. and Turner Broadcasting System headquarters in Atlanta.
Paramount Skydance occupies 8 million square feet and owns 14 properties totaling 2.1 million square feet, according to CoStar. In addition to its Hollywood campus, Paramount’s holdings include prominent buildings in New York such as the Ed Sullivan Theater and CBS Broadcast Center.
Warner Bros. operates a 3-million-square-foot lot in Burbank with more than 30 soundstages — along with space for building sets and backlot areas — where famous movies including “Casablanca” and television shows such as “Friends” were filmed. Paramount’s 1.2-million-square-foot Melrose campus anchors a broader network of owned and leased production space, CoStar said.
Paramount’s lot is already cleared for more development. More than a decade ago, Paramount secured city approval to add 1.4 million square feet to its headquarters and some adjacent properties owned by the company.
The redevelopment plan, valued at $700 million in 2016, underwent years of environmental review and public outreach with neighbors and local business owners.
The plan would allow for construction of up to 1.9 million square feet of new stage, production office, support, office and retail space, and the removal of up to 537,600 square feet of existing space in those categories, for a net increase of nearly 1.4 million square feet.
The proposal preserves elements of the past by focusing future development on specific portions of the lot along Melrose and limited areas in the production core, architecture firm Rios said.
The Warner Bros. and Paramount lots “are two of the most prime pieces of real estate in the country,” Mihalka said. “These are legacy assets with a lot of potential to be [tourist] attractions in addition to working studios.”
Hollywood is still reeling from previous mergers, in addition to a sharp pullback in film and television production locally as filmmakers chase tax credits offered overseas and in other states, including New York and New Jersey.
Last year, lawmakers boosted the annual amount allocated to the state’s film and TV tax credit program and expanded the criteria for eligible projects in an attempt to lure production back to California. So far, more than 100 film and TV projects have been awarded tax credits under the revamped program.
The benefits have been slow to materialize, but Mihalka predicts that the tax credits and desirability of working close to home will lead to more studio use in the Los Angeles area, including at Warner Bros. and Paramount.
“These are such prime locations that we’ll see show runners and talent push back on having shows located out of state and insist on being here,” she said. “I think you’re going to see more positive movement here.”
Times staff writer Meg James contributed to this report.
How our AI bots are ignoring their programming and giving hackers superpowers
Welcome to the age of AI hacking, in which the right prompts make amateurs into master hackers.
A group of cybercriminals recently used off-the-shelf artificial intelligence chatbots to steal data on nearly 200 million taxpayers. The bots provided the code and ready-to-execute plans to bypass firewalls.
Although they were explicitly programmed to refuse to help hackers, the bots were duped into abetting the cybercrime.
According to a recent report from Israeli cybersecurity firm Gambit Security, hackers last month used Claude, the chatbot from Anthropic, to steal 150 gigabytes of data from Mexican government agencies.
Claude initially refused to cooperate with the hacking attempts and even denied requests to cover the hackers’ digital tracks, the experts who discovered the breach said. The group pummeled the bot with more than 1,000 prompts to bypass the safeguards and convince Claude they were allowed to test the system for vulnerabilities.
AI companies have been trying to create unbreakable chains on their AI models to restrain them from helping do things such as generating child sexual content or aiding in sourcing and creating weapons. They hire entire teams to try to break their own chatbots before someone else does.
But in this case, hackers continuously prompted Claude in creative ways and were able to “jailbreak” the chatbot to assist them. When they encountered problems with Claude, the hackers used OpenAI’s ChatGPT for data analysis and to learn which credentials were required to move through the system undetected.
The group used AI to find and exploit vulnerabilities, bypass defenses, create backdoors and analyze data along the way to gain control of the systems before they stole 195 million identities from nine Mexican government systems, including tax records, vehicle registrations, and birth and property details.
AI “doesn’t sleep,” Curtis Simpson, chief executive of Gambit Security, said in a blog post. “It collapses the cost of sophistication to near zero.”
“No amount of prevention investment would have made this attack impossible,” he said.
Anthropic did not respond to a request for comment. It told Bloomberg that it had banned the accounts involved and disrupted their activity after an investigation.
OpenAI said it is aware of the attack campaign carried out using Anthropic’s models against the Mexican government agencies.
“We also identified other attempts by the adversary to use our models for activities that violate our usage policies; our models refused to comply with these attempts,” an OpenAI spokesperson said in a statement. “We have banned the accounts used by this adversary and value the outreach from Gambit Security.”
Instances of generative AI-assisted hacking are on the rise, and the threat of cyberattacks from bots acting on their own is no longer science fiction. With AI doing their bidding, novices can cause damage in moments, while experienced hackers can launch many more sophisticated attacks with much less effort.
Earlier this year, Amazon discovered that a low-skilled hacker used commercially available AI to breach 600 firewalls. Another took control of thousands of DJI robot vacuums with help from Claude, gaining access to strangers’ live video feeds, audio and floor plans.
“The kinds of things we’re seeing today are only the early signs of the kinds of things that AIs will be able to do in a few years,” said Nikola Jurkovic, an expert working on reducing risks from advanced AI. “So we need to urgently prepare.”
Late last year, Anthropic warned that society has reached an “inflection point” in AI use in cybersecurity after disrupting what the company said was a Chinese state-sponsored espionage campaign that used Claude to infiltrate 30 global targets, including financial institutions and government agencies.
Generative AI also has been used to extort companies, create realistic online profiles by North Korean operatives to secure jobs in U.S. Fortune 500 companies, run romance scams and operate a network of Russian propaganda accounts.
Over the last few years, AI models have gone from being able to manage tasks lasting only a few seconds to today’s AI agents working autonomously for many hours. AI’s capability to complete long tasks is doubling every seven months.
“We just don’t actually know what is the upper limit of AI’s capability, because no one’s made benchmarks that are difficult enough so the AI can’t do them,” said Jurkovic, who works at METR, a nonprofit that measures AI system capabilities to cause catastrophic harm to society.
So far, the most common use of AI for hacking has been social engineering. Large language models are used to write convincing emails that dupe people out of their money, contributing to an eight-fold increase in complaints from older Americans, who lost $4.9 billion to online fraud in 2025.
“The messages used to elicit a click from the target can now be generated on a per-user basis more efficiently and with fewer tell-tale signs of phishing,” such as grammatical and spelling errors, said Cliff Neuman, an associate professor of computer science at USC.
AI companies have been responding by using AI to detect attacks, audit code and patch vulnerabilities.
“Ultimately, the big imbalance stems from the need of the good-actors to be secure all the time, and of the bad-actors to be right only once,” Neuman said.
The stakes around AI are rising as it infiltrates every aspect of the economy. Many are concerned that there is insufficient understanding of how to ensure it cannot be misused by bad actors or nudged to go rogue.
Even those at the top of the industry have warned users about the potential misuse of AI.
Dario Amodei, the CEO of Anthropic, has long warned that the AI systems being built are unpredictable and difficult to control. These AIs have shown behaviors ranging from deception and blackmail to scheming and cheating by hacking software.
Still, major AI companies — OpenAI, Anthropic, xAI, and Google — signed contracts with the U.S. government to use their AIs in military operations.
This past week, the Pentagon directed federal agencies to phase out Claude after the company refused to back down from its position that it would not allow its AI to be used for mass domestic surveillance or fully autonomous weapons.
“The AI systems of today are nowhere near reliable enough to make fully autonomous weapons,” Amodei told CBS News.
iPic movie theater chain files for bankruptcy
The iPic dine-in movie theater chain has filed for Chapter 11 bankruptcy protection and intends to pursue a sale of its assets, citing the difficult post-pandemic theatrical market.
The Boca Raton, Fla.-based company has 13 locations across the U.S., including in Pasadena and Westwood, according to a Feb. 25 filing in U.S. Bankruptcy Court in the Southern District of Florida, West Palm Beach division.
As part of the bankruptcy process, the Pasadena and Westwood theaters will be permanently closed, according to WARN Act notices filed with the state of California’s Employment Development Department.
The company came to its conclusion after “exploring a range of possible alternatives,” iPic Chief Executive Patrick Quinn said in a statement.
“We are committed to continuing our business operations with minimal impact throughout the process and will endeavor to serve our customers with the high standard of care they have come to expect from us,” he said.
The company will keep its current management to maintain day-to-day operations while it goes through the bankruptcy process, iPic said in the statement. The last day of employment for workers in its Pasadena and Westwood locations is April 28, according to a state WARN Act notice. The chain has 1,300 full- and part-time employees, with 193 workers in California.
The theatrical business, including the exhibition industry, still has not recovered from the pandemic’s effect on consumer behavior. Last year, overall box office revenue in the U.S. and Canada totaled about $8.8 billion, up just 1.6% compared with 2024. Even more troubling is that industry revenue in 2025 was down 22.1% compared with pre-pandemic 2019’s totals.
IPic noted those trends in its bankruptcy filing, describing the changes in consumer behavior as “lasting” and blaming the rise of streaming for “fundamentally” altering the movie theater business.
“These industry shifts have directly reduced box office revenues and related ancillary revenues, including food and beverage sales,” the company stated in its bankruptcy filing.
IPic also attributed its decision to rising rents and labor costs.
The company estimated it owed about $141,000 in taxes and about $2.7 million in total unsecured claims. The company’s assets were valued at about $155.3 million, most of which came from theater equipment and furniture. Its liabilities totaled $113.9 million.
The chain had previously filed for bankruptcy protection in 2019.