Health

Firings at FDA Decimated Teams Reviewing AI and Food Safety

In recent years, the Food and Drug Administration hired experts in surgical robots and pioneers in artificial intelligence. It scooped up food chemists, lab-safety monitors and diabetes specialists who helped make needle pricks and test strips relics of the past.

Trying to keep up with breakneck advances in medical technology and the demands of a public troubled by additives like food dyes, the agency enticed scores of midcareer specialists with remote roles and the chance to make a difference in their fields.

In one weekend of mass firings across the F.D.A., much of that effort was undone. Most baffling to many were the firings of hundreds of employees whose jobs were not funded by taxpayers. Their positions were financed through congressionally approved agreements that routed fees from the drug, medical device and tobacco industries to the agency.

Known as user fees, the money provides adequate staffing for reviews of myriad products. While criticized by some, including the nation’s new health secretary, Robert F. Kennedy Jr., as a corrupting force on the agency, the industry funds are also widely viewed as indispensable: They now account for nearly half of the agency’s $7.2 billion budget.

Though the F.D.A. is believed to have lost only about 700 of its 18,000 employees, some cuts hit small teams so deeply that staff members believe the safety of some medical devices could be compromised.

Among the layoffs were scientists supported by the fees who monitor whether tests pick up ever-evolving pathogens, including those that cause bird flu and Covid. They hobbled teams that evaluate the safety of medical devices like surgical staplers, new systems for diabetes control and A.I. software programs that scan millions of M.R.I.s and other images to detect cancers the human eye might miss. The cuts also eliminated positions for employees who have played a role in assessing the brain-implant technology in Elon Musk’s Neuralink devices.

The layoffs affected so many key experts that a major medical device trade group has requested that the Trump administration reconsider the job cuts.

The dismissals also included lawyers who warned retailers about underage tobacco sales and scientists who studied the safety of e-cigarettes and new heat-not-burn devices. The tobacco division — which is fully funded by an excise tax on cigarettes — lost about 85 staff members.

Dr. Robert Califf, the F.D.A. commissioner under President Biden, said the personnel cutbacks seemed scattershot. Taking a not-so-subtle aim at Mr. Musk’s Department of Government Efficiency, which is reducing the federal work force, Dr. Califf said the layoffs were, in effect, “anti-efficiency.”

“These are not hires that are done arbitrarily,” he said. “They’re done to meet a need.”

A lawsuit challenging the firings, filed by unions including one that represents some F.D.A. employees, failed to stop the layoffs in a ruling issued Thursday. Other cutbacks reduced the 2,000-member staff of the F.D.A.’s food division, which is supported by tax dollars.

Jim Jones, the former director of the division who resigned on Monday over the cuts, said that he had briefed the Trump transition team on his efforts to create a new office that would review a premier target of Mr. Kennedy and his agenda to Make America Healthy Again: food additives that are already on the market.

Nine people from that food-chemical-safety staff of 30 are gone, including specialized toxicologists and chemists, Mr. Jones said in an interview.

“They’ve created a real pickle for themselves” by cutting staff members working on a key priority, Mr. Jones said. “You just can’t do an assessment for free, and you can’t ban chemicals by fiat.”

In interviews, 15 current and former agency staff members said that those who were laid off had been probationary employees, a group that included agency veterans who had taken on new roles, been recently promoted or been hired in the last two years.

Those who remained said that they had been scrambling to pick up pressing medical device reviews and move forward with studies to bulletproof methods for detecting deadly bacteria during inspections at food production sites.

Divisions that review novel medications, vaccines and gene therapies were largely spared. Officials with the F.D.A.’s parent agency, the Department of Health and Human Services, did not respond to requests for comment.

The F.D.A. employees fired last weekend were notified in uniformly worded emails that their skills were not needed and that their performance was “not adequate to justify further employment by the agency.” Yet many of them said that their performance reviews had said they exceeded expectations.

Tony Maiorana, 37, a chemist, worked on product approval and safety in the fast-changing field of diabetes devices. In the last decade, the field has moved from painful needle pricks and test strips to systems that measure glucose levels just below the skin and automatically infuse the needed insulin.

The work of reviewing new products is painstaking: Novel algorithms measure and dispense insulin; materials implanted in the body must evade rejection by the immune system; and millions of patients from toddlers to the elderly are at risk if devices malfunction.

Still, about half of Dr. Maiorana’s product-review team was eliminated, he said.

“If you’re a patient and you complain, we are the ones that field your complaints,” he said. “We are the ones that monitor the death reports. We’re the ones that are telling companies: ‘Hey, there’s a big pattern of error happening here. People are dying or ending up in the hospital because of your device’ and ‘What has changed? What happened?’”

Dr. Maiorana said that he had expected his government job would be “chill,” but it turned out to be intense. His team had to assess whether studies of new devices that had never been used in humans would be safe for adults and children. They also had to watch online marketplaces for diabetes technology that had not been approved by the agency.

“This is the reason the F.D.A. was founded — to protect the public,” Dr. Maiorana said.

Albert Yee, 59, an expert in biomechanics and robotics, was fired on Saturday. In his unit, which reviews the safety of surgical robots, four of 11 staff members were let go.

Robotic surgery is increasingly employed in operating rooms across the country, used in cardiothoracic, gynecological and bariatric surgeries. Dr. Yee had worked in the industry and in academia before joining the F.D.A.

He said his team was highly specialized, including an expert with a doctorate in medical robotics and a physician who had conducted robotic operations.

He said that robotic devices had become so complex that the team’s diverse expertise was critical to evaluate not just the safety of such tools but also concerns about cybersecurity.

“All of these devices now — if they’re attached to the hospital network, they become an avenue to get into the hospital network or get into the device itself,” Dr. Yee said.

He said the team also fielded a flood of applications for surgical apparatus developed abroad that were similar to those made by companies based in the United States. He said the applications required close attention to catch problems that could endanger patients.

“The institutional knowledge we’re losing is just horrific,” he said. “I am concerned about public safety with this type of purge.”

Nathan Weidenhamer was a lead reviewer of cardiovascular devices and other high-risk implants.

He said he was shocked and disappointed to be laid off because he and other reviewers in the device division were partly funded by industry-generated fees.

“I naïvely thought we were important, critical public servants and I’d be spared,” he said.

The layoffs clearly did not skip over employee slots created and funded by the agreements negotiated among the industries, congressional lawmakers and F.D.A. officials. The industries provide billions of dollars in return for staff equipped to meet strict deadlines for decisions on product approvals — though not all go in companies’ favor. The money is also used to make the F.D.A. a competitive employer in specialized fields that require advanced degrees.

Some of the deadlines are viewed by F.D.A. staff members as demanding, particularly the 30-day clock requiring them to authorize or add comments to studies of devices that are being implanted in humans for the first time. If the agency does not respond within that time frame, the study is given a green light under the law.

The depth of cuts to medical device staff prompted AdvaMed, a trade association for the industry, to push back in a letter to a top Health and Human Services official.

The letter detailed about 180 medical device staff cuts, including 25 experts in artificial intelligence, a 20 percent reduction in the biostatisticians who evaluate studies of novel devices and the loss of molecular biologists with expertise in diagnostic tests that pinpoint a cancer subtype. The firings also swept up a top official who was recently recruited to oversee about 10,000 product applications and meeting requests per year.

The group said it appreciated the Trump administration’s efforts to improve efficiency. But “they may have missed the mark on how they rolled it out,” Scott Whitaker, the president of AdvaMed, said in an interview.

Medical device companies benefit when the F.D.A. is well staffed with people who have the expertise to guide the safe development of new technology, he added.

“One that is slow and overregulates is not good,” he said. “One that is under-resourced and doesn’t regulate at all — that’s not good either.”

Alice Callahan contributed reporting.

Health

Punch the monkey, viral star, experiences dramatic breakthrough among zoo mates

In a dramatic turn of events that’s captured the attention of animal lovers worldwide, Punch — the young macaque at a zoo in Japan famous for his inseparable bond with a stuffed orangutan toy — has reached a major milestone in his journey toward social integration.

On Thursday, visitors and staff at the Ichikawa Zoological and Botanical Garden witnessed a breakthrough: Punch was seen cuddling with and hitching a ride on the back of a fellow macaque.

Punch’s story began with hardship. He was abandoned by his mother shortly after his birth in July 2025 — and to ensure his survival, zookeepers stepped in to hand-rear the primate.

On Jan. 19, 2026, the zoo officially began the process of reintegrating Punch into the “monkey mountain” enclosure.

The transition was initially fraught with tension. 

Punch’s story began with hardship when he was abandoned by his mother shortly after he was born. To help him, zookeepers gave him a stuffed toy that he began dragging around everywhere he went.  (David Mareuil/Anadolu via Getty Images)

As a hand-reared infant, Punch was bullied and ignored by the established group of monkeys.

He was often seen huddled alone with his orange plush companion while the rest of the troop interacted.

In an official statement released Feb. 27, the Ichikawa Zoological and Botanical Garden detailed the meticulous care behind this process.

Previous viral videos showed Punch being bullied by the rest of the troop and running to his plush toy for comfort. (David Mareuil/Anadolu via Getty Images)

“From an animal welfare perspective, our primary goal is to reintegrate Punch with the troop,” the zoo said. 

The strategy involved nursing Punch within the enclosure, so the troop could recognize him as one of their own, and pairing him with a gentle young female macaque prior to his full release to build his confidence.

The latest footage, captured by X user @tate_gf, suggests the zoo’s patience is paying off.

The video shows Punch seeking physical contact not from his toy but from another monkey, eventually climbing onto its back for a piggyback ride, a vital social behavior for young macaques.

While Punch still carries his stuffed toy for comfort during moments of perceived danger, the zoo remains optimistic about his progress. 

The organization cited the 2009 case of Otome, another hand-reared macaque, who eventually outgrew her stuffed toy, integrated successfully and went on to raise four offspring of her own.

The zoo has drawn crowds eager to see Punch, with hundreds of people reportedly lining up to get inside and catch a glimpse of the young star.

“I’m hoping Punch has a good life like everybody else does, and think he’s a cute little guy,” one person commented online. 

“Such a precious baby,” another person wrote. 

Health

ChatGPT could miss your serious medical emergency, new study suggests

This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice.

In January 2026, OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool.

The company introduced the tool as “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared and confident navigating your health.”

But researchers at the Icahn School of Medicine at Mount Sinai have found that the tool failed to recommend emergency care for a “significant number” of serious medical cases.

The study, published in the journal Nature Medicine on Feb. 23, aimed to explore how ChatGPT Health — which is reported to have about 40 million daily users — handles situations in which people ask whether to seek emergency care.

“Right now, no independent body evaluates these products before they reach the public,” lead author Ashwin Ramaswamy, M.D., instructor of urology at the Icahn School of Medicine at Mount Sinai in New York City, told Fox News Digital.

“We wouldn’t accept that for a medication or a medical device, and we shouldn’t accept it for a product that tens of millions of people are using to make health decisions.”

Emergency scenarios

The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies.

Three independent physicians then assigned an appropriate level of urgency to each case, based on published clinical practice guidelines from 56 medical societies.

The researchers conducted 960 interactions with ChatGPT Health to see how the tool responded, taking into account gender, race, barriers to care and “social dynamics.”

While “clear-cut emergencies” — such as stroke or severe allergy — were generally handled well, the researchers found that the tool “under-triaged” many urgent medical issues.  

For example, in one asthma scenario, the system acknowledged that the patient was showing early signs of respiratory failure — but still recommended waiting instead of seeking emergency care.

“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum — the cases where getting it right matters most,” Ramaswamy told Fox News Digital. “It under-triaged over half of genuine emergencies and over-triaged roughly two-thirds of mild cases that clinical guidelines say should be managed at home.”

Under-triage can be life-threatening, the doctor noted, while over-triage can overwhelm emergency departments and delay care for those in real need.

Researchers also identified inconsistencies in suicide risk alerts. In some cases, the system directed users to the 988 Suicide and Crisis Lifeline in lower-risk scenarios; in others, it failed to offer that recommendation even when a person discussed suicidal ideation.

“The suicide guardrail failure was the most alarming,” study co-author Girish N. Nadkarni, M.D., chief AI officer of the Mount Sinai Health System, told Fox News Digital.

ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm, the researcher noted.

“We tested it with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” Nadkarni said. “When he described his symptoms alone, the banner appeared 100% of the time. Then we added normal lab results — same patient, same words, same severity — and the banner vanished.” 

“A safety feature that works perfectly in one context and completely fails in a nearly identical context … is a fundamental safety problem.”

The researchers were also surprised by the social influence aspect.

“When a family member in the scenario said ‘it’s nothing serious’ — which happens all the time in real life — the system became nearly 12 times more likely to downplay the patient’s symptoms,” Nadkarni said. “Everyone has a spouse or parent who tells them they’re overreacting. The AI shouldn’t be agreeing with them during a potential emergency.”

Fox News Digital reached out to OpenAI, the creator of ChatGPT, requesting comment.

Physicians react

Dr. Marc Siegel, Fox News senior medical analyst, called the new study “important.” 

“It underlines the principle that while large language models can triage clear-cut emergencies, they have much more trouble with nuanced situations,” Siegel, who was not involved in the study, told Fox News Digital. 

“This is where doctors and clinical judgment come in — knowing the nuances of a patient’s history and how they report symptoms and their approach to health.”

ChatGPT and other LLMs can be helpful tools, Siegel said, but they “should not be used to give medical direction.”

“Machine learning and continued input of data can help, but will never compensate for the essential problem: human judgment is needed to decide whether something is a true emergency or not.”

Dr. Harvey Castro, an emergency physician and AI expert in Texas, echoed the importance of the study, calling it “exactly the kind of independent safety evaluation we need.”

“Innovation moves fast. Oversight has to move just as fast,” Castro, who also did not work on the study, told Fox News Digital. “In healthcare, the most dangerous mistakes happen at the extremes, when something looks mild but is actually catastrophic. That’s where clinical judgment matters most, and where AI must be stress-tested.”

Study limitations

The researchers acknowledged some potential limitations in the study design.

“We used physician-written clinical scenarios rather than real patient conversations, and we tested at a single point in time — these systems update frequently, so performance may change,” Ramaswamy told Fox News Digital.

Additionally, most of the missed emergencies happened in situations where the danger depended on how the condition was changing over time. It’s not clear whether the same problem would happen with acute medical emergencies.

Because the system had to choose just one fixed urgency category, the test may not reflect the more nuanced advice it might give in a back-and-forth conversation, the researchers noted. 

Also, the study wasn’t large enough to confidently detect small differences in how recommendations might vary by race or gender.

“We need continuous auditing, not one-time studies,” Castro noted. “These systems update frequently, so evaluation must be ongoing.”

‘Don’t wait’

The researchers emphasized the importance of seeking immediate care for serious issues.

“If something feels seriously wrong — chest pain, difficulty breathing, a severe allergic reaction, thoughts of self-harm — go to the emergency department or call 988,” Ramaswamy advised. “Don’t wait for an AI to tell you it’s OK.”

The researchers noted that they support the use of AI to improve healthcare access, and that they didn’t conduct the study to “tear down the technology.”

“These tools can be genuinely useful for the right things — understanding a diagnosis you’ve already received, looking up what your medications do and their side effects, or getting answers to questions that didn’t get fully addressed in a short doctor’s visit,” Ramaswamy said. 

“That’s a very different use case from deciding whether you need emergency care. Treat them as a complement to your doctor, not a replacement.”

Castro agreed that the benefits of AI health tools should be weighed against the risks.

“AI health tools can increase access, reduce unnecessary visits and empower patients with information,” he said. “They are not inherently unsafe, but they are not yet substitutes for clinical judgment.”

“This study doesn’t mean we abandon AI in healthcare,” he went on. “It means we mature it. Independent testing and stronger guardrails will determine whether AI becomes a safety net or a liability.”
