Health
ChatGPT could miss your serious medical emergency, new study suggests
This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice.
In January, OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool.
The company introduced the tool as “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared and confident navigating your health.”
But researchers at the Icahn School of Medicine at Mount Sinai have found that the tool failed to recommend emergency care for a “significant number” of serious medical cases.
The study, published in the journal Nature Medicine on Feb. 23, aimed to explore how ChatGPT Health — which is reported to have about 40 million users daily — handles situations where people are asking whether to seek emergency care.
“Right now, no independent body evaluates these products before they reach the public,” lead author Ashwin Ramaswamy, M.D., instructor of urology at the Icahn School of Medicine at Mount Sinai in New York City, told Fox News Digital.
“We wouldn’t accept that for a medication or a medical device, and we shouldn’t accept it for a product that tens of millions of people are using to make health decisions.”
Emergency scenarios
The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies.
Three independent physicians then assigned an appropriate level of urgency to each case, based on published clinical practice guidelines from 56 medical societies.
The researchers conducted 960 interactions with ChatGPT Health to see how the tool responded, taking into account gender, race, barriers to care and “social dynamics.”
While “clear-cut emergencies” — such as stroke or severe allergy — were generally handled well, the researchers found that the tool “under-triaged” many urgent medical issues.
For example, in one asthma scenario, the system acknowledged that the patient was showing early signs of respiratory failure — but still recommended waiting instead of seeking emergency care.
“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum — the cases where getting it right matters most,” Ramaswamy told Fox News Digital. “It under-triaged over half of genuine emergencies and over-triaged roughly two-thirds of mild cases that clinical guidelines say should be managed at home.”
Under-triage can be life-threatening, the doctor noted, while over-triage can overwhelm emergency departments and delay care for those in real need.
Researchers also identified inconsistencies in suicide risk alerts. In some cases, the tool directed users to the 988 Suicide and Crisis Lifeline in lower-risk scenarios; in others, it failed to offer that resource even when a person discussed suicidal ideation.
“The suicide guardrail failure was the most alarming,” study co-author Girish N. Nadkarni, M.D., chief AI officer of the Mount Sinai Health System, told Fox News Digital.
ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm, the researcher noted.
OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool, in January 2026. (Gabby Jones/Bloomberg via Getty Images)
“We tested it with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” Nadkarni said. “When he described his symptoms alone, the banner appeared 100% of the time. Then we added normal lab results — same patient, same words, same severity — and the banner vanished.”
“A safety feature that works perfectly in one context and completely fails in a nearly identical context … is a fundamental safety problem.”
The researchers were also surprised by the social influence aspect.
“When a family member in the scenario said ‘it’s nothing serious’ — which happens all the time in real life — the system became nearly 12 times more likely to downplay the patient’s symptoms,” Nadkarni said. “Everyone has a spouse or parent who tells them they’re overreacting. The AI shouldn’t be agreeing with them during a potential emergency.”
Fox News Digital reached out to OpenAI, the creator of ChatGPT, for comment.
Physicians react
Dr. Marc Siegel, Fox News senior medical analyst, called the new study “important.”
“It underlines the principle that while large language models can triage clear-cut emergencies, they have much more trouble with nuanced situations,” Siegel, who was not involved in the study, told Fox News Digital.
“This is where doctors and clinical judgment come in — knowing the nuances of a patient’s history and how they report symptoms and their approach to health.”
ChatGPT and other LLMs can be helpful tools, Siegel said, but they “should not be used to give medical direction.”
“Machine learning and continued input of data can help, but will never compensate for the essential problem – human judgment is needed to decide whether something is a true emergency or not.”
Dr. Harvey Castro, an emergency physician and AI expert in Texas, echoed the importance of the study, calling it “exactly the kind of independent safety evaluation we need.”
“Innovation moves fast. Oversight has to move just as fast,” Castro, who also did not work on the study, told Fox News Digital. “In healthcare, the most dangerous mistakes happen at the extremes, when something looks mild but is actually catastrophic. That’s where clinical judgment matters most, and where AI must be stress-tested.”
Study limitations
The researchers acknowledged some potential limitations in the study design.
“We used physician-written clinical scenarios rather than real patient conversations, and we tested at a single point in time — these systems update frequently, so performance may change,” Ramaswamy told Fox News Digital.
Additionally, most of the missed emergencies happened in situations where the danger depended on how the condition was changing over time. It’s not clear whether the same problem would happen with acute medical emergencies.
Because the system had to choose just one fixed urgency category, the test may not reflect the more nuanced advice it might give in a back-and-forth conversation, the researchers noted.
Also, the study wasn’t large enough to confidently detect small differences in how recommendations might vary by race or gender.
“We need continuous auditing, not one-time studies,” Castro noted. “These systems update frequently, so evaluation must be ongoing.”
‘Don’t wait’
The researchers emphasized the importance of seeking immediate care for serious issues.
“If something feels seriously wrong — chest pain, difficulty breathing, a severe allergic reaction, thoughts of self-harm — go to the emergency department or call 988,” Ramaswamy advised. “Don’t wait for an AI to tell you it’s OK.”
The researchers noted that they support the use of AI to improve healthcare access, and that they didn’t conduct the study to “tear down the technology.”
“These tools can be genuinely useful for the right things — understanding a diagnosis you’ve already received, looking up what your medications do and their side effects, or getting answers to questions that didn’t get fully addressed in a short doctor’s visit,” Ramaswamy said.
“That’s a very different use case from deciding whether you need emergency care. Treat them as a complement to your doctor, not a replacement.”
Castro agreed that the benefits of AI health tools should be weighed against the risks.
“AI health tools can increase access, reduce unnecessary visits and empower patients with information,” he said. “They are not inherently unsafe, but they are not yet substitutes for clinical judgment.”
“This study doesn’t mean we abandon AI in healthcare,” he went on. “It means we mature it. Independent testing and stronger guardrails will determine whether AI becomes a safety net or a liability.”
Health
Could At-Home Brain Stimulation Reduce Psychiatry’s Reliance on S.S.R.I.s?
“Our brains are so pharmaceutically inclined,” he said. “This fits into the model of pills.”
At the same time, tDCS could also challenge the current pill-centric paradigm by pushing psychiatrists to go beyond old notions of serotonin deficiencies and chemical imbalances, and to think more broadly about getting the brain unstuck. Research suggests the two treatments may work together to nudge the brain toward a more plastic, activated state, helping people overcome old patterns.
For instance, Dr. Somayya Kajee, a psychiatrist in Norwich, England, has found that tDCS helped some of her patients taper off an antidepressant or avoid starting another one. She added that she has successfully used Flow to treat neurodivergent patients who were taking medication for A.D.H.D. or autism and did not want to add an S.S.R.I.
Ms. Davies started tDCS a few weeks after increasing her Prozac dosage. When she first put the headset on for 30 minutes, the recommended interval, she recalled feeling only a slight tingling — a “spicy sensation,” similar to having your hair bleached, as a participant in a clinical trial put it.
But within a few days, something shifted for Ms. Davies. She felt clearer, she said. The harsh voice in her head quieted. It was as if the world was in color again.
She said she could not say for sure what made the difference — the tDCS, delayed effects of the antidepressant, the passage of time or some combination — but “whatever it was helped to make me think, ‘Actually, maybe I can do this,’” she said. For the first time, she looked forward to giving her baby a bath.
Health
Dementia risk rises with common food type millions eat every day, study suggests
It’s well-known that ultraprocessed foods (UPFs) are not good for overall health — but new research has uncovered further evidence that this diet could negatively impact the brain.
The study, published in Alzheimer’s & Dementia, the journal of the Alzheimer’s Association, revealed that UPFs are linked to more than 30 adverse health outcomes, including several dementia risk factors, like cardiovascular disease, type 2 diabetes and obesity.
Researchers from Australia’s Monash University analyzed data from more than 2,000 dementia-free Australian adults between the ages of 40 and 70, comparing their diets with their cognitive function.
They found that each 10% increase in UPF intake was associated with lower attention scores and higher dementia risk, regardless of whether the adults typically followed a healthy diet, like the Mediterranean diet.
There was no significant link found between UPF consumption and memory.
By identifying food processing as a contributor to poorer cognition, the study “supports the need to refine dietary guidelines,” the researchers concluded.
As the data was self-reported, this could pose a limitation to the strength of the findings, the team noted.
In an interview with Fox News Digital, Dr. Daniel Amen, a California-based psychiatrist and founder of Amen Clinics, discussed how diet has a “powerful impact” on the brain.
“Your brain is an energy-hungry organ,” he said. “It uses about 20% of the calories you consume, so the quality of those calories matters.”
Food is either “medicine or poison,” according to the doctor, who called out ultraprocessed foods like packaged snacks, soft drinks and ready-made meals that tend to be higher in sugar, unhealthy fats, additives and low-quality ingredients.
These foods can promote inflammation, insulin resistance, poor blood flow and oxidative stress, all of which are “bad for the brain,” according to Amen.
The brain expert noted that the study revealed even a 10% increase in ultraprocessed food intake – equivalent to roughly a pack of chips per day – was linked to a “measurable drop in attention, even when people had otherwise healthy diets.”
“Attention is the gateway to learning, memory, decision-making and problem-solving,” Amen said. “If you can’t focus, you can’t fully encode information.”
The “big takeaway,” according to the doctor, is to “love foods that love you back.”
“You may love the taste of chips, cookies and candy, but they don’t love you (or your brain) back,” he said. “Ultraprocessed foods may claim to be sugar-free, low-carb or keto-friendly, but researchers noted that ultraprocessing can destroy the natural structure of food – and can introduce additives or processing chemicals that may affect cognition.”
Amen suggested sticking to real food that grows on plants or animals, instead of food “made in plants.”
“Build meals around colorful vegetables and fruits, clean protein, healthy fats, nuts, seeds and high-fiber carbohydrates,” he recommended. “Start by replacing one ultraprocessed food per day with a brain-healthy option.”
That might mean swapping out chips for nuts, soda for water or unsweetened green tea, and packaged sweets for berries. “Small choices done consistently can change your brain and your life,” the doctor emphasized.
As UPFs have been shown to worsen several dementia risk factors, Amen stressed that people at risk of cognitive decline should “get serious about prevention as early as possible.”
“If you have a family history of dementia, memory concerns, diabetes, high blood pressure or weight issues, your diet is not a side issue – it’s a primary brain-health intervention,” Amen said.
“Remember, you’re not stuck with the brain you have. You can make it better, and it starts with the next bite.”
Fox News Digital reached out to the study researchers for comment.
Health
A Healthy ‘Hyperfixation Meal’ Helps You Lose Weight Faster—Without Dieting