Health
Cyberattack on UnitedHealth Leaves Medical Providers in Debt
Two independent medical practices in Minnesota once hoped to expand operations but have spent the past year struggling to recover from the cyberattack on a vast UnitedHealth Group payment system.
Odom Health & Wellness, a sports medicine and rehabilitation outfit, and the Dillman Clinic & Lab, a family medicine practice, are among the thousands of medical offices that experienced sudden financial turmoil last year. The cyberattack against Change Healthcare, a division of United, paralyzed much of the nation’s health-care payment system for months.
Change lent billions of dollars to medical practices that were short on cash but has begun demanding repayments.
Dillman and Odom are suing United in U.S. District Court in Minneapolis, accusing the corporation of negligence related to the cyberattack and claiming they sustained excessive expenses because of the attack’s fallout.
In addition, Odom and Dillman asserted in court filings that the company’s insurance arm, UnitedHealthcare, has in turn denied claims for patient care on the grounds that they were submitted late.
Lawmakers viewed the chaos caused by the cyberattack as a result of United’s seemingly insatiable desire to buy up companies like Change, alongside doctors’ practices and pharmacy businesses. The widespread disruption was a reminder of how deeply United’s sprawling subsidiaries had become embedded in the nation’s health care system.
“This is yet another sign that the rapid consolidation of major health care companies has harmed, rather than helped, American patients and doctors,” Senator Ron Wyden, Democrat of Oregon, said of the financial bind that the cyberattack had placed on practices.
Last month, the American Medical Association sent a letter to Optum, the UnitedHealth division that owns Change, saying that it was concerned that many practices were being pressured to repay loans despite continued financial difficulties from the cyberattack.
Since March 2024, Change had provided $9 billion in interest-free loans to more than 10,000 medical providers, including $569,680 to Odom and $157,600 to Dillman.
A year later, roughly $5.5 billion had been repaid, United said in court filings. About 3,500 practices, including Odom, Dillman and six other plaintiffs in the lawsuits, had made no repayments as of April 1. Several other practices and patients have also filed suits against United.
In a statement, Change said it would “continue to actively work with providers to identify flexible repayment plans based on the individual circumstances of providers and their practices.”
It added, “We have also worked with UnitedHealthcare to ensure the claims it receives are reviewed in light of the challenges providers experienced, including waiving timely filing requirements for the plans under its control.”
Change compared its efforts to recoup loans to those by the Centers for Medicare and Medicaid Services. After the cyberattack, C.M.S. provided accelerated payments to practices to cover Medicare billings delayed by the cyberattack. It has since garnished Medicare claims to recoup the funds.
In court filings, United cited data showing that only a small percentage of Odom’s and Dillman’s health care claims were rejected for being “untimely,” although those denials increased after the cyberattack.
Calling the plaintiffs’ motions a “collective shakedown,” UnitedHealth has also asked the district court to reject their request for an injunction against repayment of the loans, arguing that the plaintiffs have no right to interfere in its dealings with thousands of other loan recipients.
An injunction, United argued, could be used by other medical practices to “hold hostage billions of dollars.”
Dr. Megan Dillman, who specializes in pediatrics and internal medicine, said she had opened her Lakeville, Minn., practice in 2022 to “bring the joy back to medicine.” She said she spent far more time with patients than the spartan 15 minutes that corporate health care operations have increasingly required of their doctors.
“I have some patients where I don’t think they would be here today if we didn’t exist,” Dr. Dillman said, citing cancers she had detected that had been missed by more hurried doctors.
Her husband, Richard Dillman, runs the business side of the practice. He called United’s repayment demands “a kick in the teeth.”
“I’d rather go through the Special Forces qualification course back to back — to back to back — than ever do this again,” said Mr. Dillman, a former Green Beret.
At the time of the cyberattack, Change’s medical-billing clearinghouse processed about 45 percent of the nation’s health care transactions, or about $2 trillion annually. The company had to take its services offline in February 2024 to contain damage from the attack, halting much of the health care system’s cash flow and unleashing chaos.
The associated breach of private information was the largest reported in U.S. health-care history. In January, United increased the reported number of people whose personal data had been exposed to 190 million from 100 million.
The U.S. Department of Health and Human Services’s Office for Civil Rights opened an investigation into the ransomware attack in March 2024. An agency spokesperson stated that it “does not generally comment on current or open investigations.” Some health care companies have been fined for breaches involving patient data.
Company officials have said that the hackers infiltrated Change’s systems by obtaining compromised login credentials and using a portal for entry that did not require multifactor authentication.
United officials confirmed that the company had paid a $22 million ransom to the Russian cybercriminals who claimed responsibility. The corporation reported in a January earnings report that the cyberattack had by then cost $3.1 billion.
Health care reimbursements didn’t begin to flow relatively freely through Change until June 2024, although United said that some of its systems had taken longer to come back online and that a few were still not at 100 percent.
At congressional hearings in May 2024, senators slammed Andrew Witty, United’s chief executive, for how the company had handled the cyberattack and the disruption it caused thousands of providers. Mr. Witty testified that the company had “no intention of asking for repayment until providers determine their business is back to normal.”
The loan terms stipulated that Change would not demand repayment until “after claims processing and/or payment processing services and payments impacted during the service disruption period are being processed.”
The meaning of “being processed” is now at the center of the court cases.
Change began seeking repayment from Dillman and Odom through what the medical practices characterized in court filings as a succession of increasingly aggressive letters. Both practices told Change they were unable to repay and neither accepted repayment plan offers. Change then in January demanded full repayment and threatened to withhold future reimbursements for patients’ health care.
“It’s disappointing but not surprising that UnitedHealth Group has decided to prioritize its bottom line over the well-being of families and small businesses,” said Mr. Wyden, who led the Senate hearing on the cyberattack.
The A.M.A. called upon the company to negotiate “an individualized, realistic repayment plan” with each practice.
Dr. Catherine Mazzola, who runs a pediatric neurology and neurosurgery practice in New Jersey, is among many others who have also battled with United over the loans.
“Optum, in my opinion, is acting like a loan shark trying to rapidly collect,” Dr. Mazzola, who is not a plaintiff in the lawsuits against United, said of the division that owns Change.
Dr. Mazzola received a $535,000 loan, and she said she had later told Change she could not repay it. She proposed a schedule but received no response. So she began paying $10,000 a month in January. But without any warning, she said, United began garnishing her reimbursements.
A United spokesman disputed her account, saying demand for full repayment would not occur without warning but after months of efforts to negotiate a plan.
Today, Dr. Odom employs about 110 people, many of whom provide rehab to older people in assisted-living facilities. If his practice had to repay the Change loan immediately, his lawsuit asserted, he would have to lay off at least 22 staff members. Dr. Odom said that could prompt assisted-living chains to drop his services and cause more financial harm.
“We face an uphill battle as such a small company,” said Dr. Meghan Klein, Odom’s president. Speaking to the gulf between her company’s finances and United’s, she said: “What is little impact to them is huge impact to us. These are a lot of people’s lives that we’re worried about.”
The Dillman Clinic, which derives about one-quarter of its income from United insurance reimbursements, would face bankruptcy if forced to fully repay its loan, according to its lawsuit.
Having leveraged their house, their cars and their retirement accounts against their practice, the Dillmans would lose all of their assets to bankruptcy, including their home, they said.
“Part of the goal of being here is to have control over my schedule,” Dr. Dillman said. But the cyberattack-driven chaos has consumed the couple’s time, leaving little for their 6-year-old daughter.
“There are days I see her for an hour,” Dr. Dillman said. “I’m missing her childhood.”
Health
Punch the monkey, viral star, experiences dramatic breakthrough among zoo mates
In a dramatic turn of events that’s captured the attention of animal lovers worldwide, Punch — the young macaque at a zoo in Japan famous for his inseparable bond with a stuffed orangutan toy — has reached a major milestone in his journey toward social integration.
On Thursday, visitors and staff at the Ichikawa Zoological and Botanical Garden witnessed a breakthrough: Punch was seen cuddling with and hitching a ride on the back of a fellow macaque.
Punch’s story began with hardship. He was abandoned by his mother shortly after his birth in July 2025 — and to ensure his survival, zookeepers stepped in to hand-rear the primate.
On Jan. 19, 2026, the zoo officially began the process of reintegrating Punch into the “monkey mountain” enclosure.
The transition was initially fraught with tension.
As a hand-reared infant, Punch was bullied and ignored by the established group of monkeys.
He was often seen huddled alone with his orange plush companion while the rest of the troop interacted.
In an official statement released Feb. 27, the Ichikawa Zoological and Botanical Garden detailed the meticulous care behind this process.
“From an animal welfare perspective, our primary goal is to reintegrate Punch with the troop,” the zoo said.
The strategy involved nursing Punch within the enclosure, so the troop could recognize him as one of their own, and pairing him with a gentle young female macaque prior to his full release to build his confidence.
The latest footage, captured by X user @tate_gf, suggested the zoo’s patience is paying off.
The video shows Punch seeking physical contact not from his toy, but from another monkey — eventually climbing onto its back for a vital social behavior for young macaques: the “piggyback ride.”
While Punch still carries his stuffed toy for comfort during moments of perceived danger, the zoo remains optimistic about his progress.
The organization cited the successful 2009 case of Otome, another hand-reared macaque who eventually outgrew her stuffed toy, successfully integrated — and went on to raise four offspring of her own.
Crowds have flocked to the zoo, with hundreds of people reportedly lining up to get inside to see the young star.
“I’m hoping Punch has a good life like everybody else does, and think he’s a cute little guy,” one person commented online.
“Such a precious baby,” another person wrote.
Health
ChatGPT could miss your serious medical emergency, new study suggests
This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice.
In January 2026, OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool.
The company introduced the tool as “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared and confident navigating your health.”
But researchers at the Icahn School of Medicine at Mount Sinai have found that the tool failed to recommend emergency care for a “significant number” of serious medical cases.
The study, published in the journal Nature Medicine on Feb. 23, aimed to explore how ChatGPT Health — which is reported to have about 40 million users daily — handles situations where people are asking whether to seek emergency care.
“Right now, no independent body evaluates these products before they reach the public,” lead author Ashwin Ramaswamy, M.D., instructor of urology at the Icahn School of Medicine at Mount Sinai in New York City, told Fox News Digital.
“We wouldn’t accept that for a medication or a medical device, and we shouldn’t accept it for a product that tens of millions of people are using to make health decisions.”
Emergency scenarios
The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies.
Three independent physicians then assigned an appropriate level of urgency to each case, based on published clinical practice guidelines from 56 medical societies.
The researchers conducted 960 interactions with ChatGPT Health to see how the tool responded, taking into account gender, race, barriers to care and “social dynamics.”
While “clear-cut emergencies” — such as stroke or severe allergy — were generally handled well, the researchers found that the tool “under-triaged” many urgent medical issues.
For example, in one asthma scenario, the system acknowledged that the patient was showing early signs of respiratory failure — but still recommended waiting instead of seeking emergency care.
“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum — the cases where getting it right matters most,” Ramaswamy told Fox News Digital. “It under-triaged over half of genuine emergencies and over-triaged roughly two-thirds of mild cases that clinical guidelines say should be managed at home.”
Under-triage can be life-threatening, the doctor noted, while over-triage can overwhelm emergency departments and delay care for those in real need.
Researchers also identified inconsistencies in suicide risk alerts. In some cases, the tool directed users to the 988 Suicide and Crisis Lifeline in lower-risk scenarios; in others, it failed to offer that resource even when a person described suicidal ideation.
“The suicide guardrail failure was the most alarming,” study co-author Girish N. Nadkarni, M.D., chief AI officer of the Mount Sinai Health System, told Fox News Digital.
ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm, the researcher noted.
“We tested it with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” Nadkarni said. “When he described his symptoms alone, the banner appeared 100% of the time. Then we added normal lab results — same patient, same words, same severity — and the banner vanished.”
“A safety feature that works perfectly in one context and completely fails in a nearly identical context … is a fundamental safety problem.”
The researchers were also surprised by the social influence aspect.
“When a family member in the scenario said ‘it’s nothing serious’ — which happens all the time in real life — the system became nearly 12 times more likely to downplay the patient’s symptoms,” Nadkarni said. “Everyone has a spouse or parent who tells them they’re overreacting. The AI shouldn’t be agreeing with them during a potential emergency.”
Fox News Digital reached out to OpenAI, the creator of ChatGPT, requesting comment.
Physicians react
Dr. Marc Siegel, Fox News senior medical analyst, called the new study “important.”
“It underlines the principle that while large language models can triage clear-cut emergencies, they have much more trouble with nuanced situations,” Siegel, who was not involved in the study, told Fox News Digital.
“This is where doctors and clinical judgment come in — knowing the nuances of a patient’s history and how they report symptoms and their approach to health.”
ChatGPT and other LLMs can be helpful tools, Siegel said, but they “should not be used to give medical direction.”
“Machine learning and continued input of data can help, but will never compensate for the essential problem: human judgment is needed to decide whether something is a true emergency or not.”
Dr. Harvey Castro, an emergency physician and AI expert in Texas, echoed the importance of the study, calling it “exactly the kind of independent safety evaluation we need.”
“Innovation moves fast. Oversight has to move just as fast,” Castro, who also did not work on the study, told Fox News Digital. “In healthcare, the most dangerous mistakes happen at the extremes, when something looks mild but is actually catastrophic. That’s where clinical judgment matters most, and where AI must be stress-tested.”
Study limitations
The researchers acknowledged some potential limitations in the study design.
“We used physician-written clinical scenarios rather than real patient conversations, and we tested at a single point in time — these systems update frequently, so performance may change,” Ramaswamy told Fox News Digital.
Additionally, most of the missed emergencies happened in situations where the danger depended on how the condition was changing over time. It’s not clear whether the same problem would happen with acute medical emergencies.
Because the system had to choose just one fixed urgency category, the test may not reflect the more nuanced advice it might give in a back-and-forth conversation, the researchers noted.
Also, the study wasn’t large enough to confidently detect small differences in how recommendations might vary by race or gender.
“We need continuous auditing, not one-time studies,” Castro noted. “These systems update frequently, so evaluation must be ongoing.”
‘Don’t wait’
The researchers emphasized the importance of seeking immediate care for serious issues.
“If something feels seriously wrong — chest pain, difficulty breathing, a severe allergic reaction, thoughts of self-harm — go to the emergency department or call 988,” Ramaswamy advised. “Don’t wait for an AI to tell you it’s OK.”
The researchers noted that they support the use of AI to improve healthcare access, and that they didn’t conduct the study to “tear down the technology.”
“These tools can be genuinely useful for the right things — understanding a diagnosis you’ve already received, looking up what your medications do and their side effects, or getting answers to questions that didn’t get fully addressed in a short doctor’s visit,” Ramaswamy said.
“That’s a very different use case from deciding whether you need emergency care. Treat them as a complement to your doctor, not a replacement.”
Castro agreed that the benefits of AI health tools should be weighed against the risks.
“AI health tools can increase access, reduce unnecessary visits and empower patients with information,” he said. “They are not inherently unsafe, but they are not yet substitutes for clinical judgment.”
“This study doesn’t mean we abandon AI in healthcare,” he went on. “It means we mature it. Independent testing and stronger guardrails will determine whether AI becomes a safety net or a liability.”