Business
Commentary: AI isn’t ready to be your doctor yet — but will it ever be?
As almost everybody knows, the AI gold rush is upon us. And in few fields is it happening as fast and furiously as in healthcare.
That points to an important corollary: Beware.
Artificial intelligence technology has helped radiologists identify anomalies in images that human readers have missed. It has some evident benefits in relieving doctors of the back-office routines that consume hours better spent treating patients, such as filing insurance claims and scheduling appointments.
Eventually, a lot of this stuff is going to be great, but we’re not there yet.
— Eric Topol, Scripps Research
But it has also been accused of providing erroneous information to surgeons during operations that placed their patients at grave risk of injury, and fomenting panic among users who take its offhand responses as serious diagnoses.
The commercial direct-to-consumer applications being promoted by AI firms, such as OpenAI’s ChatGPT Health and Anthropic’s Claude for Healthcare — both of which were introduced in January — raise special concerns among medical professionals. That’s because they’ve been pitched to users who may not appreciate their tendency to output erroneous information and offer inappropriate advice.
“Eventually, a lot of this stuff is going to be great, but we’re not there yet,” says Eric Topol, a cardiologist associated with Scripps Research Institute in La Jolla.
“The fact that they’re putting these out without enough anchoring in safety and quality and consistency concerns me,” Topol says. “They need much tighter testing. The problem I have is that these efforts are largely stemming from commercial interests — there’s furious competition to be the first to come out with an app for patients, even if it’s not quite ready yet.”
That was the experience reported by Washington Post technology columnist Geoffrey A. Fowler, who provided ChatGPT with 10 years of health data compiled by his Apple Watch — and received a warning about his cardiac health so dire that it sent him to his cardiologist, who told him he was in the bloom of health.
Fowler also sought out Topol, who reviewed the data and found the chatbot’s warning to be “baseless.” Anthropic’s chatbot also provided Fowler with a health grade that Topol deemed dubious.
“Claude is designed to help users understand and organize their health information, framing responses as general health information rather than medical advice,” an Anthropic spokesman told me by email. “It can provide clinical context—for example, explaining how a lab value compares to diagnostic thresholds—while clearly stating that formal diagnosis requires professional evaluation.”
OpenAI didn’t respond to my questions about the safety and reliability of its consumer app.
Topol, who has written extensively about advanced technology in medicine, is nothing like an AI skeptic. He calls himself an AI optimist, citing numerous studies showing that artificial intelligence can help doctors treat patients more effectively and even improve their bedside manner.
But he cautions that “healthcare can’t tolerate significant errors. We have to minimize the errors, the hallucinations, the confabulations, the BS and the sycophancy” that AI technology commonly displays.
In medicine, as in many other fields, AI looks to have been oversold as a labor-saving technology. According to a study published earlier this month in the Lancet, the British medical journal, AI-equipped stethoscopes provided to about 100 British medical groups identified some (but not all) indications of heart failure better than conventional stethoscopes did. But 40% of the groups abandoned the new devices during the study’s 12-month period.
The main complaint was the “additional workflow burden” experienced by the users — an indication that whatever the new technology’s virtues, they didn’t outweigh the time and effort needed to use it.
Other studies have found that AI can augment physicians’ skills — when the doctors have learned to trust their AI tools and when they’re used in relatively uncomplicated, even generic, conditions.
The most notable benefits have been found in radiology; according to a Dutch study published last year, radiologists using AI to help interpret breast X-rays did as well in finding cancers as two radiologists working together. That suggested that judicious use of AI could free up time for one of the two radiologists. But in this case, as in others, the AI helper didn’t perform consistently well.
“AI misses some breast cancers that are recalled by human assessment,” a study author said, “but detects a similar number of breast cancers otherwise missed by the interpreting radiologists.”
AI’s incursion into healthcare has even become something of a cultural touchstone: In HBO’s up-to-the-minute emergency room series “The Pitt,” beleaguered ER doctors discover that an AI app pushed on them as a time-saving charting tool has “hallucinated” a history of appendicitis for a patient, endangering the patient’s treatment.
“Generative AI is not perfect,” the app’s sponsor responds. “We still need to proofread every chart it creates” — thus acknowledging, accurately, that AI can increase, not relieve, users’ workloads.
A future in which robots perform surgical operations or make accurate diagnoses remains the stuff of science fiction. In medicine, as elsewhere, AI technology has been shown to be useful for taking over automatable tasks from humans, but not in situations requiring human ingenuity or creativity — or precision. And attempts to use AI-related algorithms to make healthcare judgments have been challenged in court.
In a class-action lawsuit filed in Minnesota federal court in 2023, five Medicare patients and survivors of three others allege that UnitedHealth Group, the nation’s largest medical insurer, relied on an AI algorithm to deny coverage for their care, “overriding their treating physicians’ determinations as to medically necessary care based on an AI model” with a 90% error rate.
The case is pending. In its defense, UnitedHealth has asserted that decisions on whether to approve or deny coverage remain entirely in the hands of physicians and other clinical professionals the company employs, and their decisions on coverage and care comply with Medicare standards.
The AI algorithm cited by the plaintiffs, UnitedHealth says, is not used “to deny care to members or to make adverse medical necessity coverage determinations,” but rather to help physicians and patients “anticipate and plan for future care needs.” The company didn’t address the plaintiffs’ assertion about the algorithm’s error rate.
“We shouldn’t be complacent about accepting errors” from AI tools, Topol told me. But it’s proper to wonder whether that message has been absorbed by promoters of AI health applications.
Disclaimers warning that AI responses “are not professionally vetted or a substitute for medical advice” have all but disappeared from AI platforms, according to a survey by researchers at Stanford and UC Berkeley.
The issue becomes more urgent as the language of chatbots becomes more sophisticated and fluent, inspiring unwarranted confidence in their conclusions, the researchers cautioned. “Users may misinterpret AI-generated content as expert guidance,” they wrote, “potentially resulting in delayed treatment, inappropriate self-care, or misplaced trust in non-validated information.”
Typically, state laws require that medical diagnoses and clinical decisions proceed from physical examinations by licensed doctors and after a full workup of a patient’s medical and family history. They don’t necessarily rule out doctors’ use of AI to help them develop diagnoses or treatment plans, but the doctors must remain in control.
The Food and Drug Administration exempts medical devices from government licensing if they’re “intended generally for patient education, and … not intended for use in the diagnosis of disease or other conditions.” That may cover AI bots if they’re not issuing diagnoses.
But that may not help users who have willingly uploaded their medical histories and test results to AI bots, unaware of concerns including whether their information will be kept private or used against them in insurance decisions. Gaps in their uploaded data may affect the advice they receive from bots. And because the bots know nothing except the content they’ve been fed, their healthcare outputs may reflect cultural biases in the underlying data, such as ethnic disparities in disease incidence and treatment.
“If there’s a mistake with all your data, you could get into a pretty severe anxiety attack,” Topol says. “Patients should verify, not just trust” what they’ve heard from a bot.
Topol warns that the negative effect of misleading AI information may not only fall on patients, but on the AI field itself. “The public doesn’t really differentiate between individual bots,” he told me. “All we need are some horror stories” about misdiagnoses or dangerous advice, “and that whole area is tarred.”
In his view, that would limit the promise of technologies that could improve the effectiveness of medical practice in many ways. The remedy is for AI applications to be subjected to the same clinical standards applied to “a drug, a device, a diagnostic. We can’t lower the threshold because it’s something new, or different, with some broad appeal.”
Business
Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon
President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.
In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.
The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.
Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday unless Anthropic loosened restrictions limiting how its AI model can be used for military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or for autonomous weapons.
The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.
Defense Secretary Pete Hegseth, who met with Anthropic Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.
“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.
Anthropic didn’t immediately respond to a request for comment.
Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”
The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.
On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.
The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes, and that the safeguards Anthropic wants are already covered by law.
Still, Amodei was worried about Washington’s commitment.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
Tech workers have backed Anthropic’s stance.
Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they’re urging their employers to reject similar demands in any contracts they hold with the Pentagon.
“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.
Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.
Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.
“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”
Anthropic has distinguished itself from its rivals by touting its concern about AI safety.
The company, valued at roughly $380 billion, is legally required to balance making money with its stated public benefit: the “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.
The company has roughly 2,000 employees and annualized revenue of about $14 billion.
Business
Video: The Web of Companies Owned by Elon Musk
By Kirsten Grind, Melanie Bencosme, James Surdam and Sean Havey
February 27, 2026
Business
Commentary: How Trump helped foreign markets outperform U.S. stocks during his first year in office
Trump has crowed about the gains in the U.S. stock market during his term, but in 2025 investors saw more opportunity in the rest of the world.
If you’re a stock market investor you might be feeling pretty good about how your portfolio of U.S. equities fared in the first year of President Trump’s term.
All the major market indices seemed to be firing on all cylinders, with the Standard & Poor’s 500 index gaining 17.9% through the full year.
But if you’re the type of investor who looks for things to regret, pay no attention to the rest of the world’s stock markets. That’s because overseas markets did better than the U.S. market in 2025 — a lot better. The MSCI World ex-USA index — that is, all the stock markets except the U.S. — gained more than 32% last year, nearly double the percentage gains of U.S. markets.
That’s a major departure from recent trends. Since 2013, the MSCI US index had bested the non-U.S. index every year except 2017 and 2022, sometimes by a wide margin — in 2024, for instance, the U.S. index gained 24.6%, while non-U.S. markets gained only 4.7%.
The Trump trade is dead. Long live the anti-Trump trade.
— Katie Martin, Financial Times
Broken down into individual country markets (also by MSCI indices), in 2025 the U.S. ranked 21st out of 23 developed markets, with only New Zealand and Denmark doing worse. Leading the pack were Austria and Spain, with 86% gains, but strong records were also turned in by Finland, Ireland and Hong Kong, with gains of 50% or more, and by the Netherlands, Norway, Britain and Japan, with gains of 40% or more.
Investment analysts cite several factors to explain this trend. Judging by traditional metrics such as price/earnings multiples, the U.S. markets have been much more expensive than those in the rest of the world. Indeed, they’re historically expensive. The Standard & Poor’s 500 index traded in 2025 at about 23 times expected corporate earnings; the historical average is 18 times earnings.
Investment managers also have become nervous about the concentration of market gains within the U.S. technology sector, especially in companies associated with artificial intelligence R&D. Fears that AI is an investment bubble that could take down the S&P’s highest fliers have investors looking elsewhere for returns.
But one factor recurs in almost all the market analyses tracking relative performance by U.S. and non-U.S. markets: Donald Trump.
Investors started 2025 with optimism about Trump’s influence on trading opportunities, given his apparent commitment to deregulation, his braggadocio about America’s dominant position in the world, and his determination to preserve or even increase it.
That optimism faded months ago.
“The Trump trade is dead. Long live the anti-Trump trade,” Katie Martin of the Financial Times wrote this week. “Wherever you look in financial markets, you see signs that global investors are going out of their way to avoid Donald Trump’s America.”
Two Trump policy initiatives are commonly cited by wary investment experts. One, of course, is Trump’s on-and-off tariffs, which have left investors with little ability to assess international trade flows. The Supreme Court’s invalidation of most Trump tariffs and the bellicosity of his response, which included the immediate imposition of new 10% tariffs across the board and the threat to increase them to 15%, have done nothing to settle investors’ nerves.
Then there’s Trump’s driving down the value of the dollar through his agitation for lower interest rates, among other policies. For overseas investors, a weaker dollar makes U.S. assets more expensive relative to the outside world.
It would be one thing if trade flows and the dollar’s value reflected economic conditions that investors could themselves parse in creating a picture of investment opportunities. That’s not the case just now. “The current uncertainty is entirely man-made (largely by one orange-hued man in particular) but could well continue at least until the US mid-term elections in November,” Sam Burns of Mill Street Research wrote on Dec. 29.
Trump hasn’t been shy about trumpeting U.S. stock market gains as emblems of his policy wisdom. “The stock market has set 53 all-time record highs since the election,” he said in his State of the Union address Tuesday. “Think of that, one year, boosting pensions, 401(k)s and retirement accounts for the millions and the millions of Americans.”
Trump asserted: “Since I took office, the typical 401(k) balance is up by at least $30,000. That’s a lot of money. … Because the stock market has done so well, setting all those records, your 401(k)s are way up.”
Trump’s figure doesn’t conform to findings by retirement professionals such as the 401(k) overseers at Bank of America. They reported that the average account balance grew by only about $13,000 in 2025. I asked the White House for the source of Trump’s claim, but haven’t heard back.
Interpreting stock market returns as snapshots of the economy is a mug’s game. Despite that, at her recent appearance before a House committee, Atty. Gen. Pam Bondi tried to deflect questions about her handling of the Jeffrey Epstein records by crowing about the market.
“The Dow is over 50,000 right now,” she declared. “Americans’ 401(k)s and retirement savings are booming. That’s what we should be talking about.”
I predicted that the administration would use the Dow Jones industrial average’s break above 50,000 to assert that “the overall economy is firing on all cylinders, thanks to his policies.” The Dow reached that mark on Feb. 6. But Feb. 11, the day of Bondi’s testimony, was the last day the index closed above 50,000. On Thursday, it closed at 49,499.50, or about 1.4% below its Feb. 10 peak close of 50,188.14.
To use a metric suggested by economist Justin Wolfers of the University of Michigan, if you invested $48,448 in the Dow on the day Trump took office last year, when the Dow closed at 48,448 points, you would have had $50,000 on Feb. 6. That’s a gain of about 3.2%. But if you had invested the same amount in the global stock market not including the U.S. (based on the MSCI World ex-USA index), on that same day you would have had nearly $60,000. That’s a gain of nearly 24%.
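For readers who want to check the arithmetic, here is a minimal sketch of that Wolfers-style calculation in Python. The starting and ending dollar figures come from the column itself; the ex-USA ending value of $60,000 is an approximation implied by the stated “nearly 24%” gain, not a number drawn from index data.

```python
# Back-of-the-envelope check of the Wolfers-style comparison above.
# Dollar figures come from the column; the ex-USA ending value is an
# approximation implied by its stated "nearly 24%" gain.

def pct_gain(start: float, end: float) -> float:
    """Percentage gain on a stake that grows from start to end."""
    return (end - start) / start * 100

stake = 48_448.0       # $1 per Dow point at the pre-inauguration close
dow_end = 50_000.0     # value of the Dow stake on Feb. 6
ex_usa_end = 60_000.0  # approximate value of the same stake in MSCI World ex-USA

print(f"Dow stake:    ${dow_end:,.0f} ({pct_gain(stake, dow_end):.1f}% gain)")
print(f"Ex-USA stake: ${ex_usa_end:,.0f} ({pct_gain(stake, ex_usa_end):.1f}% gain)")
# Prints roughly 3.2% for the Dow and 23.8% -- the "nearly 24%" -- for ex-USA.
```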
Broader market indices tell essentially the same story. From Jan. 17, 2025, the last trading day before Trump’s inauguration, through Thursday’s close, the MSCI US stock index gained a cumulative 16.3%. But the world index minus the U.S. gained nearly 42%.
The gulf between U.S. and non-U.S. performance has continued into the current year. The S&P 500 has gained about 0.74% this year through Wednesday, while the MSCI World ex-USA index has gained about 8.9%. That’s “the best start for a calendar year for global stocks relative to the S&P 500 going back to at least 1996,” Morningstar reports.
It wouldn’t be unusual for the discrepancy between the U.S. and global markets to shrink or even reverse itself over the course of this year.
That’s what happened in 2017, when overseas markets as tracked by MSCI beat the U.S. by more than three percentage points, and 2022, when global markets lost money but U.S. markets underperformed the rest of the world by more than five percentage points.
Economic conditions change, and often the stock markets march to their own drummers. The one thing least likely to change is that Trump is set to remain president until Jan. 20, 2029. Make your investment bets accordingly.