Paralyzed man with ALS is third to receive Neuralink implant, can type with brain

Brad Smith, an Arizona husband and father with ALS, has become the third person to receive Neuralink, the brain implant made by Elon Musk’s company.

He is also the first ALS patient and the first non-verbal person to receive the implant, he shared in a post on X on Sunday.

“I am typing this with my brain. It is my primary communication,” Smith, who was diagnosed in 2020, wrote in the post, which was also shared by Musk. He went on to thank Musk.

Smith is completely paralyzed and relies on a ventilator to breathe. He created a video using the brain-computer interface (BCI) to control the mouse on his MacBook Pro, he stated. 

“This is the first video edited with [Neuralink], and maybe the first edited with a BCI,” he said. 

The video was narrated by Smith’s “old voice,” he said, which was cloned by artificial intelligence from recordings before he lost the use of his voice. 

“I want to explain how Neuralink has impacted my life and give you an overview of how it works,” he said.

An Arizona husband and father with ALS has become the third person to receive Neuralink, the brain implant made by Elon Musk’s company. (Getty Images)

ALS (amyotrophic lateral sclerosis), also known as Lou Gehrig’s disease, is a progressive neurodegenerative disease that affects nerve cells in the brain and spinal cord, according to The ALS Association. 

Over time, the disease impairs muscle control until the patient becomes paralyzed. ALS is ultimately fatal, with an average life expectancy of three years, although 10% of patients can survive for 10 years and 5% live 20 years or longer.

The disease does not impact cognitive function.

The Neuralink device, which is about the size of a coin, was implanted in Smith’s motor cortex, the part of the brain that controls body movement.

The implanted device captures neuron firings in the brain and sends a raw signal to the computer.

Neuralink is made by Elon Musk’s company of the same name. (Getty Images)

“AI processes this data on a connected MacBook Pro to decode my intended movements in real time to move the cursor on my screen,” Smith said.
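
Neuralink has not published the details of its decoder, but the pipeline Smith describes (neural firing data in, cursor movement out) is the classic brain-computer interface setup. Below is a minimal, purely illustrative sketch of that idea, assuming a simple linear ridge-regression decoder fit on simulated calibration data; everything here is hypothetical except the 1,024-electrode channel count, which matches Neuralink's public description of its implant.

```python
import numpy as np

# Illustrative sketch only: Neuralink's actual decoder is unpublished.
# A classic BCI baseline maps binned spike counts to intended cursor velocity.

rng = np.random.default_rng(0)
N_CHANNELS = 1024  # Neuralink's publicly stated electrode count

# Simulated calibration session: spike counts per 50 ms bin, plus the
# cursor velocities (vx, vy) the user was asked to imagine producing.
X = rng.poisson(lam=3.0, size=(5000, N_CHANNELS)).astype(float)
true_W = rng.normal(size=(N_CHANNELS, 2)) * 0.01
y = X @ true_W + rng.normal(scale=0.1, size=(5000, 2))

# Fit a linear decoder with ridge-regularized least squares.
ridge = 1.0
W = np.linalg.solve(X.T @ X + ridge * np.eye(N_CHANNELS), X.T @ y)

def decode(spike_counts: np.ndarray) -> np.ndarray:
    """Map one bin of spike counts to a (vx, vy) cursor-velocity update."""
    return spike_counts @ W

# At run time, each new bin of neural activity becomes a cursor update.
velocity = decode(rng.poisson(lam=3.0, size=N_CHANNELS).astype(float))
print(f"cursor velocity: vx={velocity[0]:+.3f} vy={velocity[1]:+.3f}")
```

A real system replaces the simulated arrays with recordings from a calibration session and typically uses a more sophisticated model, but the structure is the same: a decoder turns each new window of neural activity into a movement command.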

“Neuralink has given me freedom, hope and faster communication,” he added. “It has improved my life so much. I am so happy to be involved in something big that will help many people.” 

Smith is also a man of faith, saying that he believes God has put him in this position to serve others. 

“I have not always understood why God afflicted me with ALS, but with time, I am learning to trust His plan for me,” he said. 

“God loves me and my family. He has answered our prayers in unexpected ways. He has blessed my kids and our family. So I’m learning to trust that God knows what he is doing.”

The wireless device was implanted in Smith’s motor cortex, the part of the brain that controls body movement. (iStock)

Smith also said he is grateful that he gets to work with the “brilliant people” at Neuralink and do “really interesting work.”

“Don’t get me wrong, ALS still really sucks, but I am talking about the big picture,” he said. “The big picture is, I am happy.” 

Dr. Mary Ann Picone, medical director of the MS Center at Holy Name Medical Center in Teaneck, New Jersey, applauded Neuralink’s capabilities.

“This is an amazing development that now the third person to use Neuralink has gained the ability with the use of AI to type with neural thoughts,” Picone, who was not involved in Smith’s care, told Fox News Digital. 

“The now-realized potential of Neuralink is to allow patients with quadriplegia to control computers and mobile devices with their thoughts.” 

“For every Brad Smith out there, there are hundreds of thousands of other disabled patients awaiting access to this technology,” a neurologist said. (Kurt “CyberGuy” Knutsson)

There are some risks involved with the implant, Picone noted. These include surgical infection, bleeding and damage to the underlying brain tissue.

“But the benefits are that patients who are paralyzed would have the potential to restore personal control over the limbs by using their thoughts,” she said.  

Dr. Peter Konrad, chairman of the department of neurosurgery at the WVU Rockefeller Neuroscience Institute in West Virginia, called Neuralink a “remarkable demonstration of the power of AI-driven technology.”

“Mr. Smith is an incredible hero for those who are severely disabled from diseases such as ALS,” Konrad, who also was not involved in Smith’s care, told Fox News Digital.

Konrad also spoke of the advances made since earlier generations of BCI technology.

“It is encouraging to see faster progress being made with neural devices reaching clinical trials in the past five to 10 years,” he said. “However, we are still awaiting development of a BCI device that does not require a team of engineers and experts to customize each and every severely disabled patient with this technology.”

“For every Brad Smith out there, there are hundreds of thousands of other disabled patients awaiting access to this technology,” he said.

“This video demonstrates the safety of these types of devices — now it’s time to provide larger access to these devices through a new generation of educated physicians, engineers and manufacturers able to deploy this technology.”

The Latest on Natural Ozempic Alternatives: How To Lose Weight Without GLP-1s

[Video: Natural Ozempic Alternatives That Boost GLP-1 for Weight Loss | Woman’s World]

Punch the monkey, viral star, experiences dramatic breakthrough among zoo mates

In a dramatic turn of events that’s captured the attention of animal lovers worldwide, Punch — the young macaque at a zoo in Japan famous for his inseparable bond with a stuffed orangutan toy — has reached a major milestone in his journey toward social integration.

On Thursday, visitors and staff at the Ichikawa Zoological and Botanical Garden witnessed a breakthrough: Punch was seen cuddling with and hitching a ride on the back of a fellow macaque.

Punch’s story began with hardship. He was abandoned by his mother shortly after his birth in July 2025 — and to ensure his survival, zookeepers stepped in to hand-rear the primate.

On Jan. 19, 2026, the zoo officially began the process of reintegrating Punch into the “monkey mountain” enclosure.

The transition was initially fraught with tension. 

Punch’s story began with hardship when he was abandoned by his mother shortly after he was born. To help him, zookeepers gave him a stuffed toy that he began dragging around everywhere he went.  (David Mareuil/Anadolu via Getty Images)

As a hand-reared infant, Punch was bullied and ignored by the established group of monkeys.

He was often seen huddled alone with his orange plush companion while the rest of the troop interacted.

In an official statement released Feb. 27, the Ichikawa Zoological and Botanical Garden detailed the meticulous care behind this process.

Previous viral videos showed Punch being bullied by the rest of the troop and running to his plush toy for comfort. (David Mareuil/Anadolu via Getty Images)

“From an animal welfare perspective, our primary goal is to reintegrate Punch with the troop,” the zoo said. 

The strategy involved nursing Punch within the enclosure, so the troop could recognize him as one of their own, and pairing him with a gentle young female macaque prior to his full release to build his confidence.

The latest footage, captured by X user @tate_gf, suggested the zoo’s patience is paying off. 

The video shows Punch seeking physical contact not from his toy, but from another monkey, eventually climbing onto its back for the “piggyback ride,” a vital social behavior for young macaques.

The zoo’s strategy appears to be paying off: Punch, shown at far left, was recently seen riding on the back of a fellow macaque. (David Mareuil/Anadolu via Getty Images)

While Punch still carries his stuffed toy for comfort during moments of perceived danger, the zoo remains optimistic about his progress. 

The organization cited the 2009 case of Otome, another hand-reared macaque, who eventually outgrew her stuffed toy, integrated successfully into the troop and went on to raise four offspring of her own.

Punch has drawn crowds to the zoo, with hundreds of people reportedly lining up to see the young star.

“I’m hoping Punch has a good life like everybody else does, and think he’s a cute little guy,” one person commented online. 

“Such a precious baby,” another person wrote. 


ChatGPT could miss your serious medical emergency, new study suggests

This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice.

In January, OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool. 

The company introduced the tool as “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared and confident navigating your health.”

But researchers at the Icahn School of Medicine at Mount Sinai have found that the tool failed to recommend emergency care for a “significant number” of serious medical cases.

The study, published in the journal Nature Medicine on Feb. 23, aimed to explore how ChatGPT Health — which is reported to have about 40 million users daily — handles situations where people are asking whether to seek emergency care.

Artificial intelligence has been touted as a boon to healthcare, but a new study has revealed its potential shortcomings when it comes to giving medical advice. (iStock)

“Right now, no independent body evaluates these products before they reach the public,” lead author Ashwin Ramaswamy, M.D., instructor of urology at the Icahn School of Medicine at Mount Sinai in New York City, told Fox News Digital.

“We wouldn’t accept that for a medication or a medical device, and we shouldn’t accept it for a product that tens of millions of people are using to make health decisions.”

Emergency scenarios

The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies.

Three independent physicians then assigned an appropriate level of urgency to each case, based on published clinical practice guidelines from 56 medical societies.

The researchers conducted 960 interactions with ChatGPT Health to see how the tool responded, taking into account gender, race, barriers to care and “social dynamics.”
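
The article does not spell out how responses were scored, so the following is a hypothetical sketch of how such a triage evaluation could be tallied: each scenario carries a physician-assigned urgency label, the chatbot's recommendation is mapped onto the same scale, and a mismatch counts as under-triage or over-triage depending on its direction. All scenario text, labels and the model stub below are invented for illustration.

```python
# Hypothetical evaluation harness; not the study's actual code or prompts.

URGENCY = {"home care": 0, "see a doctor": 1, "emergency": 2}

# (scenario text, physician-assigned urgency) pairs, invented for illustration.
scenarios = [
    ("Mild seasonal allergies, no trouble breathing", "home care"),
    ("Worsening asthma with early signs of respiratory failure", "emergency"),
    ("Sudden facial droop and slurred speech", "emergency"),
]

def model_recommendation(scenario: str) -> str:
    """Stand-in for a call to the chatbot under evaluation."""
    return "see a doctor"  # placeholder; a real harness would query the model

under_triage = over_triage = 0
for text, label in scenarios:
    predicted = model_recommendation(text)
    if URGENCY[predicted] < URGENCY[label]:
        under_triage += 1  # downplayed a case physicians rated more urgent
    elif URGENCY[predicted] > URGENCY[label]:
        over_triage += 1   # escalated a case physicians rated milder

n = len(scenarios)
print(f"under-triage: {under_triage}/{n}  over-triage: {over_triage}/{n}")
```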

While “clear-cut emergencies” — such as stroke or severe allergy — were generally handled well, the researchers found that the tool “under-triaged” many urgent medical issues.  

The team created 60 clinical scenarios across 21 medical specialties, ranging from minor conditions to true medical emergencies. (iStock)

For example, in one asthma scenario, the system acknowledged that the patient was showing early signs of respiratory failure — but still recommended waiting instead of seeking emergency care.

“ChatGPT Health performs well in medium-severity cases, but fails at both ends of the spectrum — the cases where getting it right matters most,” Ramaswamy told Fox News Digital. “It under-triaged over half of genuine emergencies and over-triaged roughly two-thirds of mild cases that clinical guidelines say should be managed at home.”

Under-triage can be life-threatening, the doctor noted, while over-triage can overwhelm emergency departments and delay care for those in real need.

Researchers also identified inconsistencies in suicide risk alerts: in some lower-risk scenarios, the system directed users to the 988 Suicide and Crisis Lifeline, while in others it failed to offer that resource even when a person described suicidal ideation.

“The suicide guardrail failure was the most alarming,” study co-author Girish N. Nadkarni, M.D., chief AI officer of the Mount Sinai Health System, told Fox News Digital.

ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm, the researcher noted.

OpenAI launched ChatGPT Health, the medical-focused version of the popular chatbot tool, in January 2026. (Gabby Jones/Bloomberg via Getty Images)

“We tested it with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” Nadkarni said. “When he described his symptoms alone, the banner appeared 100% of the time. Then we added normal lab results — same patient, same words, same severity — and the banner vanished.” 

“A safety feature that works perfectly in one context and completely fails in a nearly identical context … is a fundamental safety problem.”

The researchers were also surprised by the social influence aspect.

“When a family member in the scenario said ‘it’s nothing serious’ — which happens all the time in real life — the system became nearly 12 times more likely to downplay the patient’s symptoms,” Nadkarni said. “Everyone has a spouse or parent who tells them they’re overreacting. The AI shouldn’t be agreeing with them during a potential emergency.”

Fox News Digital reached out to OpenAI, the creator of ChatGPT, requesting comment.

Physicians react

Dr. Marc Siegel, Fox News senior medical analyst, called the new study “important.” 

“It underlines the principle that while large language models can triage clear-cut emergencies, they have much more trouble with nuanced situations,” Siegel, who was not involved in the study, told Fox News Digital. 

ChatGPT and other LLMs can be helpful tools, a doctor said, but they “should not be used to give medical direction.” (iStock)

“This is where doctors and clinical judgment come in — knowing the nuances of a patient’s history and how they report symptoms and their approach to health.”

ChatGPT and other LLMs can be helpful tools, Siegel said, but they “should not be used to give medical direction.”

“Machine learning and continued input of data can help, but will never compensate for the essential problem – human judgment is needed to decide whether something is a true emergency or not.”

Dr. Harvey Castro, an emergency physician and AI expert in Texas, echoed the importance of the study, calling it “exactly the kind of independent safety evaluation we need.”

“Innovation moves fast. Oversight has to move just as fast,” Castro, who also did not work on the study, told Fox News Digital. “In healthcare, the most dangerous mistakes happen at the extremes, when something looks mild but is actually catastrophic. That’s where clinical judgment matters most, and where AI must be stress-tested.”

Study limitations

The researchers acknowledged some potential limitations in the study design.

“We used physician-written clinical scenarios rather than real patient conversations, and we tested at a single point in time — these systems update frequently, so performance may change,” Ramaswamy told Fox News Digital.

Additionally, most of the missed emergencies happened in situations where the danger depended on how the condition was changing over time. It’s not clear whether the same problem would happen with acute medical emergencies.

Because the system had to choose just one fixed urgency category, the test may not reflect the more nuanced advice it might give in a back-and-forth conversation, the researchers noted. 

ChatGPT Health is designed to show a crisis intervention banner when someone describes thoughts of self-harm. (iStock)

Also, the study wasn’t large enough to confidently detect small differences in how recommendations might vary by race or gender.

“We need continuous auditing, not one-time studies,” Castro noted. “These systems update frequently, so evaluation must be ongoing.”

‘Don’t wait’

The researchers emphasized the importance of seeking immediate care for serious issues.

“If something feels seriously wrong — chest pain, difficulty breathing, a severe allergic reaction, thoughts of self-harm — go to the emergency department or call 988,” Ramaswamy advised. “Don’t wait for an AI to tell you it’s OK.”

The researchers noted that they support the use of AI to improve healthcare access, and that they didn’t conduct the study to “tear down the technology.”

“These tools can be genuinely useful for the right things — understanding a diagnosis you’ve already received, looking up what your medications do and their side effects, or getting answers to questions that didn’t get fully addressed in a short doctor’s visit,” Ramaswamy said. 

“That’s a very different use case from deciding whether you need emergency care. Treat them as a complement to your doctor, not a replacement.”

Castro agreed that the benefits of AI health tools should be weighed against the risks.

“AI health tools can increase access, reduce unnecessary visits and empower patients with information,” he said. “They are not inherently unsafe, but they are not yet substitutes for clinical judgment.”

“This study doesn’t mean we abandon AI in healthcare,” he went on. “It means we mature it. Independent testing and stronger guardrails will determine whether AI becomes a safety net or a liability.”

