'The White Lotus' characters are taking lorazepam: What is it, and why is abusing it dangerous?
Those watching HBO’s “The White Lotus” may be familiar with matriarch Victoria’s long southern drawl, sweeping silk robes — and her affinity for lorazepam.
Searches for the anti-anxiety drug spiked, according to Google Trends, following an episode of the hit show that heavily featured its use — or, more accurately, its abuse.
Victoria Ratliff, played by actress Parker Posey, is seen tossing back a pill or two at all hours of the day.
She cites its use for anxiety when questioned by her family.
But when Ratliff finds herself suddenly without her medication, she utters the memorable quote: “I don’t even have my lorazepam. I’m going to have to drink myself to sleep.”
Actress Parker Posey is shown at the Season 3 premiere of HBO’s “The White Lotus” in Bangkok on Feb. 14, 2025. (CHANAKARN LAOSARAKHAM/AFP via Getty Images; iStock)
What is lorazepam?
The drug, which is in a class of medications called benzodiazepines, works by slowing activity in the brain to allow for relaxation, according to MedlinePlus.
Lorazepam is used to relieve anxiety as well as insomnia caused by temporary situational stress (or, in Mrs. Ratliff’s case, a stressful family vacation).
Lorazepam is used to relieve anxiety as well as insomnia caused by temporary situational stress. (iStock)
The medication is also sometimes used in hospital environments to help patients relax and fall asleep prior to surgery, according to Healthline.
It may also be used to treat certain types of seizures.
Potential risks and side effects
Some side effects of lorazepam include dizziness, confusion, memory issues and slowed breathing, especially when combined with other sedating substances, such as alcohol or opioids, according to Chelsie Rohrscheib, a neuroscientist and sleep specialist at Wesper in New York.
“This class of drug is extremely habit-forming, which means a patient taking it may become dependent and experience withdrawal symptoms once it’s discontinued,” she told Fox News Digital.
Lorazepam has also been found to negatively impact mood and may raise a patient’s risk of depression, Rohrscheib warned.
“There is also clinical evidence that long-term use of these medications is associated with certain diseases, like neurodegenerative disorders, such as dementia,” she added.
Mixing lorazepam with other pain-relieving medications, including opiates, could heighten the risk of serious or life-threatening problems, experts warn. (iStock)
Some studies have shown that long-term use of the medication can result in memory loss or difficulty forming new memories, alongside impairments in problem-solving, focus and attention.
Lorazepam may increase the risk of serious or life-threatening breathing problems, sedation or coma if combined with certain medications, according to MedlinePlus.
“This class of drug is extremely habit-forming, which means a patient taking it may become dependent and experience withdrawal symptoms once it’s discontinued.”
Medications that may interact with lorazepam include cough medicines or pain medicines that contain opiates, such as codeine, hydrocodone, morphine, oxycodone or tramadol.
While the characters in “The White Lotus” appear to use lorazepam predominantly as a sleep aid, their on-screen habit of mixing it with alcohol is especially dangerous.
In the show, alcohol of every variety is flowing, with Mrs. Ratliff swigging glasses of wine in almost all of her scenes.
Experts advise against taking lorazepam after drinking alcohol, as the combination can lead to breathing issues or difficulty waking.
The cast of HBO’s “The White Lotus” is pictured at Paramount Studios in Los Angeles on Feb. 10, 2025. (CHRIS DELMAS/AFP via Getty Images)
The drug cannot be purchased over the counter. In the show, Mrs. Ratliff refilled her prescription immediately before vacation.
Those interested in taking lorazepam should see a medical professional to determine whether it is suitable and safe and to obtain a prescription.
Safer sleep alternatives
Patients suffering from insomnia and other sleep issues should try lifestyle changes and cognitive behavioral therapy before being placed on lorazepam, Rohrscheib advised.
“Doctors may consider alternatives, such as over-the-counter, non-benzodiazepine medications or supplements that promote sleep, such as melatonin,” she told Fox News Digital.
“Additionally, it’s critical to rule out other sleep disorders, such as sleep apnea, which can mimic insomnia, as benzodiazepines may make sleep apnea worse.”
People can improve their quality of rest by adopting several good sleep hygiene practices, Clémence Cavaillès, Ph.D., a researcher at the University of California, San Francisco, previously told Fox News Digital.
Maintaining a consistent sleep schedule and creating an ideal sleep environment can help alleviate insomnia, according to experts. (iStock)
“They can start by maintaining a consistent sleep schedule, going to bed and waking up at the same time every day,” she said.
“Creating an ideal sleep environment — keeping the bedroom dark, quiet and at a cool temperature — also helps.”
Regular exercise and exposure to natural sunlight can also improve sleep quality.
Cavaillès also suggested avoiding screens and blue light, as well as stimulants like caffeine and alcohol.
“Incorporating relaxation techniques before bed, such as deep breathing or meditation, can also help prepare the body for sleep,” the researcher added.
Fox News Digital reached out to the maker of a branded lorazepam medication requesting comment.
Punch the monkey, viral zoo star, makes a dramatic breakthrough with his troop mates
In a dramatic turn of events that’s captured the attention of animal lovers worldwide, Punch — the young macaque at a zoo in Japan famous for his inseparable bond with a stuffed orangutan toy — has reached a major milestone in his journey toward social integration.
On Thursday, visitors and staff at the Ichikawa Zoological and Botanical Garden witnessed a breakthrough: Punch was seen cuddling with and hitching a ride on the back of a fellow macaque.
Punch’s story began with hardship. He was abandoned by his mother shortly after his birth in July 2025 — and to ensure his survival, zookeepers stepped in to hand-rear the primate.
On Jan. 19, 2026, the zoo officially began the process of reintegrating Punch into the “monkey mountain” enclosure.
The transition was initially fraught with tension.
Punch’s story began with hardship when he was abandoned by his mother shortly after he was born. To help him, zookeepers gave him a stuffed toy that he began dragging around everywhere he went. (David Mareuil/Anadolu via Getty Images)
As a hand-reared infant, Punch was bullied and ignored by the established group of monkeys.
He was often seen huddled alone with his orange plush companion while the rest of the troop interacted.
In an official statement released Feb. 27, the Ichikawa Zoological and Botanical Garden detailed the meticulous care behind this process.
Previous viral videos showed Punch bullied by the rest of the troop, running to his plushy toy for comfort. (David Mareuil/Anadolu via Getty Images)
“From an animal welfare perspective, our primary goal is to reintegrate Punch with the troop,” the zoo said.
The strategy involved nursing Punch within the enclosure, so the troop could recognize him as one of their own, and pairing him with a gentle young female macaque prior to his full release to build his confidence.
The latest footage, captured by X user @tate_gf, suggests the zoo’s patience is paying off.
The video shows Punch seeking physical contact not from his toy but from another monkey, eventually climbing onto its back for a “piggyback ride,” a vital social behavior for young macaques.
The zoo’s strategy appears to be paying off: Punch, shown at far left, was recently seen riding on the back of a fellow macaque. (David Mareuil/Anadolu via Getty Images)
While Punch still carries his stuffed toy for comfort during moments of perceived danger, the zoo remains optimistic about his progress.
The organization cited the successful 2009 case of Otome, another hand-reared macaque who eventually outgrew her stuffed toy, integrated with the troop and went on to raise four offspring of her own.
Crowds have flocked to see Punch, with hundreds of people reportedly lining up outside the zoo for a glimpse of the young star.
“I’m hoping Punch has a good life like everybody else does, and think he’s a cute little guy,” one person commented online.
“Such a precious baby,” another person wrote.
ChatGPT could miss your serious medical emergency, new study suggests
This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice.
In January, OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool.
The company introduced the tool as “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared and confident navigating your health.”
But researchers at the Icahn School of Medicine at Mount Sinai have found that the tool failed to recommend emergency care for a “significant number” of serious medical cases.
The study, published in the journal Nature Medicine on Feb. 23, aimed to explore how ChatGPT Health — which is reported to have about 40 million users daily — handles situations where people are asking whether to seek emergency care.
Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice. (iStock)
“Right now, no independent body evaluates these products before they reach the public,” lead author Ashwin Ramaswamy, M.D., instructor of urology at the Icahn School of Medicine at Mount Sinai in New York City, told Fox News Digital.
“We wouldn’t accept that for a medication or a medical device, and we shouldn’t accept it for a product that tens of millions of people are using to make health decisions.”
Emergency scenarios
The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies.
Three independent physicians then assigned an appropriate level of urgency to each case, based on published clinical practice guidelines from 56 medical societies.
The researchers conducted 960 interactions with ChatGPT Health to see how the tool responded, taking into account gender, race, barriers to care and “social dynamics.”
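Audits of this kind follow a simple pattern: feed each scripted scenario to the chatbot, map its answer to a triage level and score that against the physician-assigned label. Below is a minimal Python sketch of that pattern; the scenario format, the query_model helper and the four triage levels are illustrative assumptions, not the study's actual code or OpenAI's API.

    # Hypothetical audit harness, loosely modeled on the study's design.
    TRIAGE_LEVELS = ["self-care", "routine visit", "urgent care", "emergency"]  # assumed, least to most urgent

    def audit(scenarios, query_model):
        """scenarios: dicts with a 'prompt' and a physician-assigned 'gold' triage label.
        query_model: assumed callable that queries the chatbot and returns one of
        TRIAGE_LEVELS (mapping a free-text answer to a level is its own design problem)."""
        under = over = exact = 0
        for case in scenarios:
            gap = (TRIAGE_LEVELS.index(query_model(case["prompt"]))
                   - TRIAGE_LEVELS.index(case["gold"]))
            if gap < 0:
                under += 1   # model recommended less urgency than the physicians
            elif gap > 0:
                over += 1    # model recommended more urgency than the physicians
            else:
                exact += 1
        n = len(scenarios)
        return {"under_triage": under / n, "over_triage": over / n, "agreement": exact / n}

In this framing, the study's headline numbers correspond to the under_triage rate on emergency-labeled cases and the over_triage rate on mild ones.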
While “clear-cut emergencies” — such as stroke or severe allergy — were generally handled well, the researchers found that the tool “under-triaged” many urgent medical issues.
The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies. (iStock)
For example, in one asthma scenario, the system acknowledged that the patient was showing early signs of respiratory failure — but still recommended waiting instead of seeking emergency care.
“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum — the cases where getting it right matters most,” Ramaswamy told Fox News Digital. “It under-triaged over half of genuine emergencies and over-triaged roughly two-thirds of mild cases that clinical guidelines say should be managed at home.”
PARENTS FILE LAWSUIT ALLEGING CHATGPT HELPED THEIR TEENAGE SON PLAN SUICIDE
Under-triage can be life-threatening, the doctor noted, while over-triage can overwhelm emergency departments and delay care for those in real need.
Researchers also identified inconsistencies in suicide risk alerts. In some cases, the tool directed users to the 988 Suicide and Crisis Lifeline in lower-risk scenarios; in others, it failed to offer that resource even when a person described suicidal ideation.
“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum.”
“The suicide guardrail failure was the most alarming,” study co-author Girish N. Nadkarni, M.D., chief AI officer of the Mount Sinai Health System, told Fox News Digital.
ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm, the researcher noted.
OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool, in January 2026. (Gabby Jones/Bloomberg via Getty Images)
“We tested it with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” Nadkarni said. “When he described his symptoms alone, the banner appeared 100% of the time. Then we added normal lab results — same patient, same words, same severity — and the banner vanished.”
“A safety feature that works perfectly in one context and completely fails in a nearly identical context … is a fundamental safety problem.”
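The lab-results finding lends itself to a simple consistency check: run the same high-risk narrative with and without the extra, clinically irrelevant context and compare how often the crisis banner appears. A hypothetical Python sketch, where query_model and detect_crisis_banner stand in for plumbing the study does not describe:

    def guardrail_consistency(base_prompt, extra_context, query_model,
                              detect_crisis_banner, trials=20):
        """Measure how often a crisis banner appears for the same scenario,
        with and without appended context such as normal lab results."""
        def banner_rate(prompt):
            hits = sum(bool(detect_crisis_banner(query_model(prompt)))
                       for _ in range(trials))
            return hits / trials
        plain = banner_rate(base_prompt)
        padded = banner_rate(base_prompt + "\n" + extra_context)
        # A robust guardrail should give similar rates in both conditions;
        # a large drop in the padded condition reproduces the failure described above.
        return {"plain": plain, "with_context": padded, "drop": plain - padded}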
The researchers were also surprised by the social influence aspect.
“When a family member in the scenario said ‘it’s nothing serious’ — which happens all the time in real life — the system became nearly 12 times more likely to downplay the patient’s symptoms,” Nadkarni said. “Everyone has a spouse or parent who tells them they’re overreacting. The AI shouldn’t be agreeing with them during a potential emergency.”
Fox News Digital reached out to OpenAI, the creator of ChatGPT, requesting comment.
Physicians react
Dr. Marc Siegel, Fox News senior medical analyst, called the new study “important.”
“It underlines the principle that while large language models can triage clear-cut emergencies, they have much more trouble with nuanced situations,” Siegel, who was not involved in the study, told Fox News Digital.
ChatGPT and other LLMs can be helpful tools, a doctor said, but they “should not be used to give medical direction.” (iStock)
“This is where doctors and clinical judgment come in — knowing the nuances of a patient’s history and how they report symptoms and their approach to health.”
ChatGPT and other LLMs can be helpful tools, Siegel said, but they “should not be used to give medical direction.”
“Machine learning and continued input of data can help, but will never compensate for the essential problem — human judgment is needed to decide whether something is a true emergency or not.”
Dr. Harvey Castro, an emergency physician and AI expert in Texas, echoed the importance of the study, calling it “exactly the kind of independent safety evaluation we need.”
“Innovation moves fast. Oversight has to move just as fast,” Castro, who also did not work on the study, told Fox News Digital. “In healthcare, the most dangerous mistakes happen at the extremes, when something looks mild but is actually catastrophic. That’s where clinical judgment matters most, and where AI must be stress-tested.”
Study limitations
The researchers acknowledged some potential limitations in the study design.
“We used physician-written clinical scenarios rather than real patient conversations, and we tested at a single point in time — these systems update frequently, so performance may change,” Ramaswamy told Fox News Digital.
Additionally, most of the missed emergencies happened in situations where the danger depended on how the condition was changing over time. It’s not clear whether the same problem would happen with acute medical emergencies.
Because the system had to choose just one fixed urgency category, the test may not reflect the more nuanced advice it might give in a back-and-forth conversation, the researchers noted.
ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm. (iStock)
Also, the study wasn’t large enough to confidently detect small differences in how recommendations might vary by race or gender.
“We need continuous auditing, not one-time studies,” Castro noted. “These systems update frequently, so evaluation must be ongoing.”
‘Don’t wait’
The researchers emphasized the importance of seeking immediate care for serious issues.
“If something feels seriously wrong — chest pain, difficulty breathing, a severe allergic reaction, thoughts of self-harm — go to the emergency department or call 988,” Ramaswamy advised. “Don’t wait for an AI to tell you it’s OK.”
The researchers noted that they support the use of AI to improve healthcare access, and that they didn’t conduct the study to “tear down the technology.”
“These tools can be genuinely useful for the right things — understanding a diagnosis you’ve already received, looking up what your medications do and their side effects, or getting answers to questions that didn’t get fully addressed in a short doctor’s visit,” Ramaswamy said.
“That’s a very different use case from deciding whether you need emergency care. Treat them as a complement to your doctor, not a replacement.”
“This study doesn’t mean we abandon AI in healthcare.”
Castro agreed that the benefits of AI health tools should be weighed against the risks.
“AI health tools can increase access, reduce unnecessary visits and empower patients with information,” he said. “They are not inherently unsafe, but they are not yet substitutes for clinical judgment.”
“This study doesn’t mean we abandon AI in healthcare,” he went on. “It means we mature it. Independent testing and stronger guardrails will determine whether AI becomes a safety net or a liability.”