Business

Commentary: AI isn’t ready to be your doctor yet — but will it ever be?

As almost everybody knows, the AI gold rush is upon us. And in few fields is it happening as fast and furiously as in healthcare.

That points to an important corollary: Beware.

Artificial intelligence technology has helped radiologists identify anomalies in images that human users have missed. It has some evident benefits in relieving doctors of the back-office routines that consume hours better spent treating patients, such as filing insurance claims and scheduling appointments.

But it has also been accused of providing erroneous information to surgeons during operations that placed their patients at grave risk of injury, and fomenting panic among users who take its offhand responses as serious diagnoses.

The commercial direct-to-consumer applications being promoted by AI firms, such as OpenAI’s ChatGPT Health and Anthropic’s Claude for Healthcare — both of which were introduced in January — raise special concerns among medical professionals. That’s because they’ve been pitched to users who may not appreciate their tendency to output erroneous information and offer inappropriate advice.

“Eventually, a lot of this stuff is going to be great, but we’re not there yet,” says Eric Topol, a cardiologist associated with Scripps Research Institute in La Jolla.

“The fact that they’re putting these out without enough anchoring in safety and quality and consistency concerns me,” Topol says. “They need much tighter testing. The problem I have is that these efforts are largely stemming from commercial interests — there’s furious competition to be the first to come out with an app for patients, even if it’s not quite ready yet.”

That was the experience reported by Washington Post technology columnist Geoffrey A. Fowler, who provided ChatGPT with 10 years of health data compiled by his Apple Watch — and received a warning about his cardiac health so dire that it sent him to his cardiologist, who told him he was in the bloom of health.

Fowler also sought out Topol, who reviewed the data and found the chatbot’s warning to be “baseless.” Anthropic’s chatbot also provided Fowler with a health grade that Topol deemed dubious.

“Claude is designed to help users understand and organize their health information, framing responses as general health information rather than medical advice,” an Anthropic spokesman told me by email. “It can provide clinical context—for example, explaining how a lab value compares to diagnostic thresholds—while clearly stating that formal diagnosis requires professional evaluation.”

OpenAI didn’t respond to my questions about the safety and reliability of its consumer app.

Topol, who has written extensively about advanced technology in medicine, is nothing like an AI skeptic. He calls himself an AI optimist, citing numerous studies showing that artificial intelligence can help doctors treat patients more effectively and even improve their bedside manner.

But he cautions that “healthcare can’t tolerate significant errors. We have to minimize the errors, the hallucinations, the confabulations, the BS and the sycophancy” that AI technology commonly displays.

In medicine, as in many other fields, AI looks to have been oversold as a labor-saving technology. A study of AI-equipped stethoscopes provided to about 100 British medical groups, published earlier this month in the Lancet, the British medical journal, found that the high-tech stethoscopes identified some (but not all) indications of heart failure better than conventional stethoscopes. But 40% of the groups abandoned the new devices during the 12-month period of the study.

The main complaint was the “additional workflow burden” experienced by the users — an indication that whatever the new technology’s virtues, they didn’t outweigh the time and effort needed to use it.

Other studies have found that AI can augment physicians’ skills — when the doctors have learned to trust their AI tools and when they’re used in relatively uncomplicated, even generic, conditions.

The most notable benefits have been found in radiology; according to a Dutch study published last year, radiologists using AI to help interpret breast X-rays did as well in finding cancers as two radiologists working together. That suggested that judicious use of AI could free up time for one of the two radiologists. But in this case as in others, the AI helper didn’t do consistently well.

“AI misses some breast cancers that are recalled by human assessment,” a study author said, “but detects a similar number of breast cancers otherwise missed by the interpreting radiologists.”

AI’s incursion into healthcare has even become something of a cultural touchstone: In HBO’s up-to-the-minute emergency room series “The Pitt,” beleaguered ER doctors discover that an AI app pushed on them as a time-saving charting tool has “hallucinated” a history of appendicitis for a patient, endangering the patient’s treatment.

“Generative AI is not perfect,” the app’s sponsor responds. “We still need to proofread every chart it creates” — thus acknowledging, accurately, that AI can increase, not relieve, users’ workloads.

A future in which robots perform surgical operations or make accurate diagnoses remains the stuff of science fiction. In medicine, as elsewhere, AI technology has been shown to be useful to take over automatable tasks from humans, but not in situations requiring human ingenuity or creativity — or precision. And attempts to use AI-related algorithms to make healthcare judgments have been challenged in court.

In a class-action lawsuit filed in Minnesota federal court in 2023, five Medicare patients and survivors of three others allege that UnitedHealth Group, the nation’s largest medical insurer, relied on an AI algorithm to deny coverage for their care, “overriding their treating physicians’ determinations as to medically necessary care based on an AI model” with a 90% error rate.

The case is pending. In its defense, UnitedHealth has asserted that decisions on whether to approve or deny coverage remain entirely in the hands of physicians and other clinical professionals the company employs, and their decisions on coverage and care comply with Medicare standards.

The AI algorithm cited by the plaintiffs, UnitedHealth says, is not used “to deny care to members or to make adverse medical necessity coverage determinations,” but rather to help physicians and patients “anticipate and plan for future care needs.” The company didn’t address the plaintiffs’ assertion about the algorithm’s error rate.

“We shouldn’t be complacent about accepting errors” from AI tools, Topol told me. But it’s proper to wonder whether that message has been absorbed by promoters of AI health applications.

Disclaimers warning that AI responses “are not professionally vetted or a substitute for medical advice” have all but disappeared from AI platforms, according to a survey by researchers at Stanford and UC Berkeley.

The issue becomes more urgent as the language of chatbots becomes more sophisticated and fluent, inspiring unwarranted confidence in their conclusions, the researchers cautioned. “Users may misinterpret AI-generated content as expert guidance,” they wrote, “potentially resulting in delayed treatment, inappropriate self-care, or misplaced trust in non-validated information.”

Typically, state laws require that medical diagnoses and clinical decisions proceed from physical examinations by licensed doctors and after a full workup of a patient’s medical and family history. They don’t necessarily rule out doctors’ use of AI to help them develop diagnoses or treatment plans, but the doctors must remain in control.

The Food and Drug Administration exempts medical devices from government licensing if they’re “intended generally for patient education, and … not intended for use in the diagnosis of disease or other conditions.” That may cover AI bots if they’re not issuing diagnoses.

But that may not help users who have willingly uploaded their medical histories and test results to AI bots, unaware of concerns about whether their information will be kept private or used against them in insurance decisions. Gaps in their uploaded data may affect the advice they receive from bots. And because the bots know nothing except the content they’ve been fed, their healthcare outputs may reflect cultural biases in the basic data, such as ethnic disparities in disease incidence and treatment.

“If there’s a mistake with all your data, you could get into a pretty severe anxiety attack,” Topol says. “Patients should verify, not just trust” what they’ve heard from a bot.

Topol warns that the negative effect of misleading AI information may not only fall on patients, but on the AI field itself. “The public doesn’t really differentiate between individual bots,” he told me. “All we need are some horror stories” about misdiagnoses or dangerous advice, “and that whole area is tarred.”

In his view, that would limit the promise of technologies that could improve the effectiveness of medical practice in many ways. The remedy is for AI applications to be subjected to the same clinical standards applied to “a drug, a device, a diagnostic. We can’t lower the threshold because it’s something new, or different, with some broad appeal.”

Business

California tech company Cloudflare to lay off more than 1,000 workers, cites AI
Cloudflare is laying off 20% of its staff, the latest technology company to announce big cuts as it uses more artificial intelligence-powered tools.

The San Francisco web performance and cybersecurity company said it was getting rid of 1,100 people.

“The way we work at Cloudflare has fundamentally changed,” Chief Executive Matthew Prince and Chief Operating Officer Michelle Zatlyn told employees in an email. “We don’t just build and sell AI tools and platforms. We are our own most demanding customer.”

It is the latest tech company this week to announce massive layoffs as tech workers embrace the use of AI agents to perform tasks such as generating code more quickly. Coinbase said Tuesday that it would cut 14% of its workforce, or roughly 700 workers. PayPal is reportedly planning to slash 20% of its staff.

Other companies such as Meta, Block and Oracle have announced layoffs this year. From January to April, U.S. tech employers announced 85,411 job cuts, up 33% from the same period last year, outplacement and executive coaching firm Challenger, Gray & Christmas said Thursday.

Cloudflare’s email, which was published on its blog, said that in the last three months, its use of AI has jumped more than 600%. Employees in various roles in engineering, HR, finance and marketing are running “thousands of AI agent sessions each day to get their work done,” and the company has to be “intentional” as it prepares for the “agentic AI era,” the email said.

Cloudflare executives added that the company is hoping to avoid further major layoffs.

“We are making these changes now because making smaller, repeated cuts or dragging a reorganization out over multiple quarters creates prolonged emotional uncertainty for employees and stalls our ability to build,” the email said.

The company estimates that severance and other restructuring will cost between $140 million and $150 million for 2026.

Cloudflare didn’t say how many of those cuts will be in its San Francisco office. The company has offices in other parts of the world, including Asia, Europe and the Middle East, according to its website.

As of December, Cloudflare had 5,156 employees.

Cloudflare announced job cuts the same day it reported its first-quarter earnings. The company’s revenue jumped 34% year-over-year to $639.8 million in the first quarter. It posted a net loss of $22.9 million.

But the company’s forecast for the second quarter fell short of Wall Street’s expectations. Cloudflare projected revenue of $664 million to $665 million for the second quarter, which was lower than the $666 million Wall Street anticipated.

Cloudflare’s stock dropped roughly 18% to $209 per share in after-hours trading.


Business

Why Stocks and Bonds Are Responding Differently to the Iran War

The unique global status of the U.S. dollar and financial markets, and the strength of the U.S. economy, have enabled the government to retain its current rating. “A large, dynamic economy, the dollar’s reserve-currency role and the depth and liquidity of U.S. capital markets are key sovereign rating strengths,” Fitch said. But a variety of “governance” issues under the Trump administration, as well as the conflict in the Middle East, along with persistent and widening budget deficits, have challenged that credit rating.

Nonetheless, U.S. Treasuries have attracted global investors as a “safe haven” during the conflict. Other countries, like Britain, don’t have that status now. Yields on British 30-year government bonds, known as gilts, have reached their highest level since 1998. And Britain’s benchmark 10-year bond yield was close to 5 percent, a premium of more than 0.6 percentage points above the equivalent Treasury.

Major world central banks have responded defensively to these financial storms. As I wrote last week, the Bank of Japan, European Central Bank, Bank of England and Federal Reserve have all decided to take no action on their key interest rates because of the dual risks posed by rising oil prices resulting from the war with Iran: There are heightened risks of both runaway inflation and throttled economic growth.

That dilemma continues. Kevin M. Warsh, nominated to succeed Jerome H. Powell as Federal Reserve chair, has spoken frequently of the need to trim interest rates but the markets are skeptical. They project no Fed action on rates through December 2027 as the most likely outcome, with a greater possibility of interest rate increases than of reductions, according to futures prices tracked by CME FedWatch.

In short, central banks, which control the shortest-duration interest rates, and the bond market, which sets longer rates, view the economic environment with a jaundiced eye. There is a range of possibilities, from prosperity in many developed markets to chaos if the conflict in the Middle East widens. Fixed-income markets tend to focus on risks more than on the potential for windfall profits that the stock market cherishes.


Business

Commentary: Blame gas stations — and yourself — for the rise and fall of gas prices

Here’s the name for an economic phenomenon that consumers are going to be hearing a lot more of in the coming weeks and months:

It’s the rocket-and-feathers hypothesis, which concerns why gasoline prices rise so quickly (i.e., like a rocket) when oil prices surge and drift downward oh so slowly (like feathers) when crude prices come back to earth.

The pattern is certain to become ever more obvious as oil prices continue to oscillate in response to President Trump’s Iran war and the effect of constrictions in the volume of crude moving through the Strait of Hormuz.

The price of crude oil, which had settled at about $60 a barrel before Trump ratcheted up his anti-Iran rhetoric in February, has reached as high as about $113 after the conflict began, but fell below $96 during the day Wednesday as talk emerged of a possible peace deal.

Meanwhile, the average price of gasoline has soared relentlessly, reaching a nationwide average Wednesday of about $4.54 per gallon of regular, according to AAA. That’s up 12 cents from a month ago and higher by $1.38 from a year ago. So the pace at which pump prices return to those halcyon days before Trump’s saber-rattling is certain to be top of mind for consumers nationwide — and globally — if and when tensions ebb in the strait.

The economics of gasoline play a unique role for most households. That’s largely because gasoline demand is relatively inelastic, in economic parlance: It’s hard for many people to reduce their consumption when prices rise, because they still have to commute to their workplace and perform the same daily chores that require auto travel.

They can move down to a lower grade of fuel, but their options to do so are limited compared with the choices they can make at, say, the supermarket, where they can respond to a surge in the price of beef by choosing a cheaper cut or a cheaper protein.

That makes it useful to understand what drives gasoline prices higher or lower. Let’s take a look.

The academic bookshelf groans with the weight of studies of the phenomenon, but the seminal analysis of the topic remains a 1997 paper by economist Severin Borenstein of UC Berkeley and his colleagues.

They looked at how crude oil prices affected profit margins at several points in the crude-to-pump voyage from oil to gasoline, including crude supplies to the wholesale market and wholesale to retail. They found signs of rocket-and-feathers price changes at all points, but for the layperson their general conclusion was this: It’s not your imagination.

“The evidence … supports the common belief that retail gasoline prices respond more quickly to increases in crude oil prices than to decreases,” Borenstein and his colleagues wrote in 1997.
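The basic mechanics behind that asymmetry can be illustrated with a toy partial-adjustment model. This is a hypothetical sketch for intuition only, not the econometric model Borenstein and his colleagues estimated; every number in it (the 50-cent margin, the adjustment speeds, the crude path) is invented.

```python
# Toy "rocket and feathers" simulation: retail price chases a target of
# crude cost plus a fixed margin, closing the gap quickly when the target
# is above the current price and slowly when it is below.
# All parameters are illustrative, not estimated from data.

def retail_path(crude_prices, margin=0.50, up_speed=0.9, down_speed=0.2):
    """Return the simulated retail price for each period, given crude prices."""
    retail = crude_prices[0] + margin  # start at the target price
    path = [retail]
    for crude in crude_prices[1:]:
        target = crude + margin
        gap = target - retail
        # Asymmetry: prices rise like a rocket, fall like feathers.
        speed = up_speed if gap > 0 else down_speed
        retail += speed * gap
        path.append(round(retail, 3))
    return path

# Crude (in $/gal-equivalent) spikes from 2.00 to 3.00, then falls back.
crude = [2.0] * 3 + [3.0] * 5 + [2.0] * 8
print(retail_path(crude))
```

Running it shows the retail price nearly finishing its climb within two periods of the crude spike, while many periods after crude falls back the pump price remains well above its starting level: the rocket up, the feathers down.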

The phenomenon is still “alive and well,” Borenstein told me Wednesday, adding that “much of this is a retail pricing phenomenon,” meaning that much of the explanation can be found at your corner gas station.

It can also be found in consumer behavior, specifically consumers’ inclination to search for lower prices during a spike. When prices are going up, consumers may see a high price at a particular gas station and think it’s an outlier, so they look for alternatives — even if all stations are raising prices. “They think they’ve found a bad deal, when in reality all prices are high,” says economist Matthew S. Lewis of Clemson University, who studies consumer search behavior.

When prices are falling, Lewis told me, consumers lose their incentive to search because they find prices that are similar to what they’ve expected. “Once everyone’s lowered their prices a little bit, that takes away their incentive to lower them further because no one is looking around for lower prices” and further reductions won’t win gas stations any new customers. “Everyone’s happy at the first station they stop at,” Lewis says.

Retailer profit margins are chronically slim — and during rapid crude price increases, even negative — giving retailers an incentive to raise prices quickly as the cost of crude and of refined gas mounts, and to hold the higher prices steady to recover their margins as their other costs fall.

It’s also true that consumers become more sensitive to higher prices because press coverage makes the price hikes inescapable, and less so as prices fall, even if they don’t fully return to earlier levels. Just now, as it happens, the price of gasoline receives front-page coverage and is flashed almost minute by minute on cable news shows.

Lewis points out, however, that “there’s a strong asymmetric pattern in press coverage too. As prices are going up, that’s talked about a lot, and as prices start to fall the coverage goes down and down, and people’s attention does too.”

That brings us to the factors affecting the price of gasoline. The cost of crude oil is known as the spot price — the price quoted by traders on the open market. By the time the oil reaches consumers as gasoline at the pump, it has changed hands several times — at refineries, regional terminals and local distributors.

The analysis by Borenstein and his colleagues found most of those markets to be reasonably competitive — that is, their prices adjusted quickly to changes in crude prices. But asymmetry — prices rising fast but falling slowly — increased as the refined product made its way to city distribution terminals and subsequently to retail stations. It’s the latter that have the most incentive to raise prices quickly and to stick with them the longest.

“Asymmetry in price adjustment is a retail thing,” Lewis says, “which is what you’d expect if the source is consumer search rather than collusion.”

It can be difficult to pinpoint the factors reflected in retail gas prices because they differ among regions. After Hurricanes Katrina and Rita laid waste to drilling, transport and refining facilities around the Gulf of Mexico coast in 2005, gas prices soared in the South, Midwest and along the East Coast, which depended heavily on crude and refined gas produced in or near the gulf. That resulted in gas prices jumping by nearly 60 cents per gallon, according to research by Lewis.

But the pace at which the increases ebbed differed within that market, in part because its retail structures differed among states and cities. In those with high concentrations of independent gas stations — those unaffiliated with branded refineries — prices fell relatively faster.

The reason, Lewis found, was that those communities experienced “cyclical pricing,” in which gas station owners had a habit of changing their prices frequently as a competitive device, often moving the price of gas day by day. Strategic pricing tended to make high prices relatively less sticky.

California is another unique market. The state’s limited refinery capacity makes it more vulnerable to crude price shocks, and its mandate for anti-smog gas formulations in the summer also constrains gas supplies, pushing prices higher. California’s gas taxes are higher than the national average, contributing to its nation-leading prices at the pump.

Then there’s what Borenstein has identified as the state’s “mystery gasoline surcharge,” an unexplained differential in price that originated after a 2015 fire at a Torrance refinery then owned by Exxon Mobil, but persists without explanation more than a decade later and is currently estimated at more than 50 cents per gallon.

What’s indisputable is that consumers are paying for the Iran war at the pump, and they’ll continue to do so for weeks, even months, after the conflict is resolved and the Strait of Hormuz is opened again to all traffic. Economists observe, furthermore, that large price spikes at the pump take longer to return to equilibrium than small ones, in part because retailers can keep prices high until they see evidence that they’re losing customers.

In other words, it’s reasonable to feel relief once crude oil prices retrace their journey back to where they were before the Iran war began. Just don’t expect to feel relief at the pump any time soon.
