Business
Commentary: AI isn’t ready to be your doctor yet — but will it ever be?
As almost everybody knows, the AI gold rush is upon us. And in few fields is it happening as fast and furiously as in healthcare.
That points to an important corollary: Beware.
Artificial intelligence technology has helped radiologists identify anomalies in images that human users have missed. It has some evident benefits in relieving doctors of the back-office routines that consume hours better spent treating patients, such as filing insurance claims and scheduling appointments.
Eventually, a lot of this stuff is going to be great, but we’re not there yet.
— Eric Topol, Scripps Research
But it has also been accused of providing erroneous information to surgeons during operations that placed their patients at grave risk of injury, and fomenting panic among users who take its offhand responses as serious diagnoses.
The commercial direct-to-consumer applications being promoted by AI firms, such as OpenAI’s ChatGPT Health and Anthropic’s Claude for Healthcare — both of which were introduced in January — raise special concerns among medical professionals. That’s because they’ve been pitched to users who may not appreciate their tendency to output erroneous information and offer inappropriate advice.
“Eventually, a lot of this stuff is going to be great, but we’re not there yet,” says Eric Topol, a cardiologist associated with Scripps Research Institute in La Jolla.
“The fact that they’re putting these out without enough anchoring in safety and quality and consistency concerns me,” Topol says. “They need much tighter testing. The problem I have is that these efforts are largely stemming from commercial interests — there’s furious competition to be the first to come out with an app for patients, even if it’s not quite ready yet.”
That was the experience reported by Washington Post technology columnist Geoffrey A. Fowler, who provided ChatGPT with 10 years of health data compiled by his Apple Watch — and received a warning about his cardiac health so dire that it sent him to his cardiologist, who told him he was in the bloom of health.
Fowler also sought out Topol, who reviewed the data and found the chatbot’s warning to be “baseless.” Anthropic’s chatbot also provided Fowler with a health grade that Topol deemed dubious.
“Claude is designed to help users understand and organize their health information, framing responses as general health information rather than medical advice,” an Anthropic spokesman told me by email. “It can provide clinical context—for example, explaining how a lab value compares to diagnostic thresholds—while clearly stating that formal diagnosis requires professional evaluation.”
OpenAI didn’t respond to my questions about the safety and reliability of its consumer app.
Topol, who has written extensively about advanced technology in medicine, is nothing like an AI skeptic. He calls himself an AI optimist, citing numerous studies showing that artificial intelligence can help doctors treat patients more effectively and even improve their bedside manner.
But he cautions that “healthcare can’t tolerate significant errors. We have to minimize the errors, the hallucinations, the confabulations, the BS and the sycophancy” that AI technology commonly displays.
In medicine, as in many other fields, AI looks to have been oversold as a labor-saving technology. According to a study published earlier this month in the Lancet, the British medical journal, AI-equipped stethoscopes provided to about 100 British medical groups identified some (but not all) indications of heart failure better than conventional stethoscopes did. But 40% of the groups abandoned the new devices during the 12-month period of the study.
The main complaint was the “additional workflow burden” experienced by the users — an indication that whatever the virtues of the new technology, they didn’t outweigh the time and effort needed to use them.
Other studies have found that AI can augment physicians’ skills — when the doctors have learned to trust their AI tools and when they’re used in relatively uncomplicated, even generic, conditions.
The most notable benefits have been found in radiology; according to a Dutch study published last year, radiologists using AI to help interpret breast X-rays did as well in finding cancers as two radiologists working together. That suggested that judicious use of AI could free up time for one of the two radiologists. But in this case as in others, the AI helper didn’t do consistently well.
“AI misses some breast cancers that are recalled by human assessment,” a study author said, “but detects a similar number of breast cancers otherwise missed by the interpreting radiologists.”
AI’s incursion into healthcare even has become something of a cultural touchstone: In HBO’s up-to-the-minute emergency room series “The Pitt,” beleaguered ER doctors discover that an AI app pushed on them as a time-saving charting tool has “hallucinated” a history of appendicitis for a patient, endangering the patient’s treatment.
“Generative AI is not perfect,” the app’s sponsor responds. “We still need to proofread every chart it creates” — thus acknowledging, accurately, that AI can increase, not relieve, users’ workloads.
A future in which robots perform surgical operations or make accurate diagnoses remains the stuff of science fiction. In medicine, as elsewhere, AI technology has been shown to be useful to take over automatable tasks from humans, but not in situations requiring human ingenuity or creativity — or precision. And attempts to use AI-related algorithms to make healthcare judgments have been challenged in court.
In a class-action lawsuit filed in Minnesota federal court in 2023, five Medicare patients and survivors of three others allege that UnitedHealth Group, the nation’s largest medical insurer, relied on an AI algorithm to deny coverage for their care, “overriding their treating physicians’ determinations as to medically necessary care based on an AI model” with a 90% error rate.
The case is pending. In its defense, UnitedHealth has asserted that decisions on whether to approve or deny coverage remain entirely in the hands of physicians and other clinical professionals the company employs, and their decisions on coverage and care comply with Medicare standards.
The AI algorithm cited by the plaintiffs, UnitedHealth says, is not used “to deny care to members or to make adverse medical necessity coverage determinations,” but rather to help physicians and patients “anticipate and plan for future care needs.” The company didn’t address the plaintiffs’ assertion about the algorithm’s error rate.
“We shouldn’t be complacent about accepting errors” from AI tools, Topol told me. But it’s proper to wonder whether that message has been absorbed by promoters of AI health applications.
Disclaimers warning that AI responses “are not professionally vetted or a substitute for medical advice” have all but disappeared from AI platforms, according to a survey by researchers at Stanford and UC Berkeley.
The issue becomes more urgent as the language of chatbots becomes more sophisticated and fluent, inspiring unwarranted confidence in their conclusions, the researchers cautioned. “Users may misinterpret AI-generated content as expert guidance,” they wrote, “potentially resulting in delayed treatment, inappropriate self-care, or misplaced trust in non-validated information.”
Typically, state laws require that medical diagnoses and clinical decisions proceed from physical examinations by licensed doctors and after a full workup of a patient’s medical and family history. They don’t necessarily rule out doctors’ use of AI to help them develop diagnoses or treatment plans, but the doctors must remain in control.
The Food and Drug Administration exempts medical devices from government licensing if they’re “intended generally for patient education, and … not intended for use in the diagnosis of disease or other conditions.” That may cover AI bots if they’re not issuing diagnoses.
But that may not help users who have willingly uploaded their medical histories and test results to AI bots, unaware of the risks, including whether their information will be kept private or used against them in insurance decisions. Gaps in their uploaded data may affect the advice they receive from bots. And because the bots know nothing except the content they’ve been fed, their healthcare outputs may reflect cultural biases in the underlying data, such as ethnic disparities in disease incidence and treatment.
“If there’s a mistake with all your data, you could get into a pretty severe anxiety attack,” Topol says. “Patients should verify, not just trust” what they’ve heard from a bot.
Topol warns that the negative effect of misleading AI information may not only fall on patients, but on the AI field itself. “The public doesn’t really differentiate between individual bots,” he told me. “All we need are some horror stories” about misdiagnoses or dangerous advice, “and that whole area is tarred.”
In his view, that would limit the promise of technologies that could improve the effectiveness of medical practice in many ways. The remedy is for AI applications to be subjected to the same clinical standards applied to “a drug, a device, a diagnostic. We can’t lower the threshold because it’s something new, or different, with some broad appeal.”
WGA cancels Los Angeles awards show amid labor strike
The Writers Guild of America West has canceled its awards ceremony scheduled to take place March 8 as its staff union members continue to strike, demanding higher pay and protections against artificial intelligence.
In a letter sent to members on Sunday, WGA West’s board of directors, including President Michele Mulroney, wrote, “The non-supervisory staff of the WGAW are currently on strike and the Guild would not ask our members or guests to cross a picket line to attend the awards show. The WGAW staff have a right to strike and our exceptional nominees and honorees deserve an uncomplicated celebration of their achievements.”
The New York ceremony, scheduled on the same day, is expected to go forward, while an alternative celebration for Los Angeles-based nominees will take place at a later date, according to the letter.
Comedian and actor Atsuko Okatsuka was set to host the L.A. show, while filmmaker James Cameron was to receive the WGA West Laurel Award.
WGA union staffers have been striking outside the guild’s Los Angeles headquarters on Fairfax Avenue since Feb. 17. The union alleged that management did not intend to reach an agreement on the pending contract. Further, it claimed that guild management had “surveilled workers for union activity, terminated union supporters, and engaged in bad faith surface bargaining.”
On Tuesday, the labor organization said that management had raised the specter of canceling the ceremony during a call about contract negotiations.
“Make no mistake: this is an attempt by WGAW management to drive a wedge between WGSU and WGA membership when we should be building unity ahead of MBA [Minimum Basic Agreement] negotiations with the AMPTP [Alliance of Motion Picture and Television Producers],” the staff union wrote on Instagram. “We urge Guild management to end this strike now.”
The union, made up of more than 100 employees who work in areas including legal, communications and residuals, was formed last spring and first authorized a strike in January with 82% of its members voting in favor. Contract negotiations, which began in September, have focused on the use of artificial intelligence, pay raises and “basic protections” including grievance procedures.
The WGA has said that it offered “comprehensive proposals with numerous union protections and improvements to compensation and benefits.”
The ceremony’s cancellation, coming just weeks before the Academy Awards, casts a shadow over the upcoming contract negotiations between the WGA and the Alliance of Motion Picture and Television Producers, which represents the studios and streamers.
In 2023, the WGA went on strike for 148 days, the second-longest walkout in the union’s history.
Times staff writer Cerys Davies contributed to this report.
Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”
That danger is also imminent.
Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.
Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.
(Tom Williams / CQ-Roll Call Inc. via Getty Images)
The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.
Other AI companies, such as xAI, maker of white-rights advocate Elon Musk’s Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).
Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.
“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”
For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before anything like Claude had been imagined even in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” That could be fair game legally, because the law has not kept pace with the technology.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”
Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?
Help, Claude! Make it make sense.
If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.
“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.
Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?
“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”
OK then.
“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position, a complete abdication on the part of our legislative bodies to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it’s dangerous to trust it, we should believe it.
Why companies are making this change to their office space to cater to influencers
For the trendiest tenants in Hollywood office buildings, it’s the latest fad, one that goes way beyond designer furniture and art: mini studios.
To capitalize on the never-ending flow of stars and influencers who come through Los Angeles, a growing number of companies are building bright little corners for content creators to try products and shoot short videos. Athletic apparel maker Puma, Kim Kardashian’s Skims and cheeky cosmetics retailer e.l.f. have spaces specifically designed to give people a place to experience and broadcast about their brands.
Hollywood, which hasn’t historically been home to apparel companies, is now attracting the offices of fashion retailers, says CIM Group, one of the neighborhood’s largest commercial property landlords.
“When we’re touring a space, one of the first items they bring up is, ‘Where can I build a studio?’” said Blake Eckert, who leases CIM offices in L.A.
These studio offices also serve as marketing centers, with showrooms and meeting spaces where brands can host proprietary events not open to the public.
“For companies where brand visibility is really important, there is a trend of creating spaces that don’t just function as offices,” said real estate broker Nicole Mihalka of CBRE, who puts together entertainment property leases and sales.
Puma’s global entertainment marketing team, which works with such musical celebrity partners as Rihanna, ASAP Rocky, Dua Lipa, Skepta and Rosé, is based in the company’s new Hollywood offices, said Allyssa Rapp, head of Puma Studio L.A.
Allyssa Rapp, director of entertainment marketing at Puma, is shown in the Puma Studio L.A.
(Kayla Bartkowski / Los Angeles Times)
Hollywood is a central location, she said, for meeting with celebrities, stylists and outside designers, most of whom are based in Los Angeles.
The office is a “creation hub,” she said, where influencers can record content in Puma’s design prototyping lab, which is supported by libraries of materials and equipment used to create Puma apparel. The company, founded in 1948, is known for emblematic sneakers such as the Speedcat and its lunging feline logo, and makes athletic wear, accessories and equipment.
Puma’s entertainment marketing team also occupies the office and sometimes uses it for exclusive events.
“We use the space as a showroom, as a social space that transforms from a traditional workplace into more of an experiential space,” Rapp said.
Nontraditional uses include content creation, sit-down dinners, product launches, album listening parties and workshops.
“Inviting people into our space and being able to give them high-touch brand experiences is something tangible and important for them,” she said. “The cultural layer is really important for us.”
The company keeps a closet full of Puma products on hand to give VIP guests. Visits to the studio sanctum are by invitation only, though. There’s no retail portal to the exclusive Hollywood offices.
Puma shoes are on display in the Puma Studio L.A.
(Kayla Bartkowski / Los Angeles Times)
Puma is also positioning its L.A. studio as a connection point for major sporting events coming to Los Angeles, including the World Cup this summer, the 2027 Super Bowl and the 2028 Olympics.
In-office studios don’t need to be big to be impactful, Mihalka said. “These are smaller stages, closer to green screen than a massive soundstage.”
Social media is the key driver of content created by most businesses, which may set up small booth-like stages where influencers can hawk hot products while offering discounts to people watching them perform.
Bigger, elevated stages can accommodate multiple performers for extended discussions in front of small audiences, with towering screens behind them to set the mood or illustrate products.
Among the tricked-out offices, she said, is Skims. The company, which is valued at $5 billion, is based in a glass-and-steel office building near the fabled intersection of Hollywood Boulevard and Vine Street.
The fashion retailer declined to comment on the studio uses in its headquarters, but according to architecture firm Odaa, it has open and private offices, meeting rooms, collaboration zones, photo studios, sample libraries, prototype showrooms, an executive lounge and a commissary for 400 people.
Pieces of a shoe sit on a workbench in the Puma Studio L.A.
(Kayla Bartkowski / Los Angeles Times)
The brands building studios typically want to find the darkest spot on the premises to put their content creation or podcast spaces, Eckert said, where they can limit outside light and sound. That’s commonly near the center of the office floor, far from windows and close to permanent shear walls that limit sound intrusion.
They also need space for green rooms and restrooms dedicated to the talent.
Spotify recently built a fancy podcast studio in a CIM office building on trendy Sycamore Avenue that is open by invitation only to video creators in Spotify’s partner program.
“Ambitious shows need spaces that support big ideas,” Bill Simmons, head of talk strategy at Spotify, said in a statement. “These studios give teams room to experiment and keep pushing what’s possible.”