
ChatGPT Health promises privacy for health conversations


OpenAI is rolling out ChatGPT Health, a new space for private health and wellness conversations. Importantly, the company says it will not use your health information or Health chats to train its core artificial intelligence (AI) models. As more people turn to ChatGPT to understand lab results and prepare for doctor visits, that promise matters. For many users, privacy remains the deciding factor.

Health appears as a separate space inside ChatGPT for early-access users. You will see it in the sidebar on desktop and in the menu on mobile. If you ask a health-related question in a regular chat, ChatGPT may suggest moving the conversation into Health for added protection. For now, access remains limited, but OpenAI says it plans to roll out ChatGPT Health gradually to users on Free, Go, Plus and Pro plans.

Sign up for my FREE CyberGuy Report

Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


AI DISCLOSURE IN HEALTHCARE: WHAT PATIENTS MUST KNOW

Health chats stay isolated from regular conversations and are excluded from AI training by default. (OpenAI)

What makes ChatGPT Health different from regular chats

ChatGPT Health is built as a separate environment, not just another chat thread. Here is what stands out:

A dedicated private space

Health conversations live in their own area. Files, chats and memories stay contained there. They do not mix with your regular ChatGPT conversations.

Clear medical boundaries

ChatGPT Health is not meant to diagnose conditions or replace a doctor. You will see reminders that responses are informational only and not medical advice.


Connecting your health data

If you choose, you can connect medical records and wellness apps to Health. This helps ground responses in your own data. Supported connections include:

  • Medical records, such as lab results and visit summaries
  • Apple Health for sleep, activity, and movement data
  • MyFitnessPal for nutrition and macros
  • Function for lab insights and nutrition guidance
  • Weight Watchers for GLP-1 meal ideas
  • Fitness and lifestyle apps like Peloton, AllTrails and Instacart

You control access. You can disconnect any app at any time and revoke permissions immediately.

Extra privacy protections

OpenAI says Health uses additional encryption and isolation designed specifically for sensitive health data. Health chats are excluded from training foundation models by default.

CAN AI CHATBOTS TRIGGER PSYCHOSIS IN VULNERABLE PEOPLE?

ChatGPT Health creates a separate space designed specifically for health and wellness conversations. (OpenAI)

Things you should not share on ChatGPT

Even with stronger privacy promises, caution still matters. Avoid sharing:

  • Full Social Security numbers
  • Insurance member IDs or policy numbers
  • Login credentials or passwords
  • Scans of government-issued IDs
  • Financial account numbers
  • Highly sensitive details you would not tell a clinician

Health is designed to inform and prepare you, not to replace professional care or secure systems built for identity protection.

ChatGPT Health was built with doctors

OpenAI built ChatGPT Health with direct input from more than 260 physicians across many medical specialties worldwide. Over two years, those clinicians reviewed hundreds of thousands of example responses and flagged wording that could confuse readers or delay care.

Their feedback guides how ChatGPT Health explains lab results, frames risk and prompts follow-ups with a licensed clinician. The system focuses on safety, clarity and timely escalation when needed. Ultimately, the goal is to help you have better conversations with your doctor, not replace one.

OPENAI LIMITS CHATGPT’S ROLE IN MENTAL HEALTH HELP

Users can connect medical records and wellness apps to better understand trends before talking with a doctor. (OpenAI)

What this means for you

For many people, health information is scattered across portals, PDFs, apps and emails. ChatGPT Health aims to pull that context together in one place.


The key takeaway is control. You decide what to connect, what to delete and when to walk away.

How to get access to ChatGPT Health

If you do not see Health yet, you can join the waitlist inside ChatGPT. Once you have access:

  • Select Health from the sidebar
  • Upload files or connect apps from Settings
  • Start asking questions grounded in your own data

You can also customize instructions inside Health to control tone, topics, and focus.


Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com


Kurt’s key takeaways

ChatGPT Health reflects how people already use AI to understand their health. What matters most is the privacy line OpenAI is drawing. Health conversations stay separate and are not used to train core models. That promise builds trust, but smart sharing still matters. AI can help you prepare, understand and organize. Your doctor still makes the call.

Would you trust an AI assistant with your health data if it promised stronger privacy than standard chat tools, or does that still feel like a step too far?  Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com.  All rights reserved.

Canvas is down as ShinyHunters threatens to leak schools’ data

The Instructure-owned learning management platform, Canvas, is down after recently confirming a massive data breach that impacted student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas.

“Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode,” according to Instructure’s status page. “We anticipate being up soon, and will provide updates as soon as possible.”

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said its data leak site contains 9,000 schools, including data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

Update, May 7th: Added Instructure’s maintenance mode message.

Humanoid robot named ‘Gabi’ ordained as Buddhist monk, pledges devotion to ‘holy Buddha’


A high-tech humanoid robot was officially “ordained” as a Buddhist monk during a ceremony at Seoul’s Jogyesa Temple on Wednesday.

The robot, a $13,500 Unitree G1 model standing just over four feet tall, was given the name “Gabi.” Dressed in traditional brown robes, plain shoes and gloves designed to mimic human hands, the machine stood before a panel of Buddhist monks to commit itself to the faith.

During the ceremony, hosted by the Jogye Order of Korean Buddhism, the robot was asked by a monk if it would devote itself to the “holy Buddha.”

“Yes, I will devote myself,” Gabi responded to the crowd’s cheers.


AI HUMANOID ROBOT LEARNS TO MIMIC HUMAN EMOTIONS AND BEHAVIOR

More than 200 humanoid robots perform during Agibot Night, a live televised gala in Shanghai ahead of Lunar New Year. (Tang Yanjun/China News Service)

The ceremony highlights a growing effort among religious institutions to engage younger, tech-driven audiences, raising broader questions about whether artificial intelligence can play a meaningful role in spiritual life or if such moves risk trivializing long-standing traditions.

While humans typically pledge to abstain from killing, stealing and intoxicating substances, Gabi’s vows were “reprogrammed” for the digital age. The robot pledged to respect and follow humans, refrain from damaging property or other robots, abstain from deceptive behavior and save energy by not overcharging.

The Jogye Order, South Korea’s largest Buddhist sect, framed the move as an effort to make ancient traditions more relevant to a younger, tech-obsessed generation.


HUMANOID ROBOT TURNS HEADS AT NYC SNEAKER STORE

A humanoid robot, front, and Buddhist monks put hands together for a photo after an ordination ceremony ahead of upcoming Buddha’s birthday on May 24 at Jogye temple in Seoul, South Korea, Wednesday, May 6, 2026. (Lee Jin-man/AP)

“The ordination of a robot signifies that technology must be used in accordance with the values of compassion, wisdom, and responsibility,” the order said in a statement shared with The New York Times. Officials added that the move symbolizes “new possibilities for the coexistence of humans and technology.”

Hong Min-suk, a manager at the order, told the publication that robots are “destined to collaborate with humans in every field,” suggesting it is only “natural” for them to participate in religious festivals.

The Jogye Order did not immediately respond to Fox News Digital’s request for comment.


Despite the temple’s optimistic outlook, the move has drawn criticism online. A video of Gabi’s pledge quickly surpassed one million views, with some users on X questioning whether a machine can meaningfully participate in religious practice.

Buddhist monks arrive at Washington National Cathedral in Washington, D.C., on Feb. 10, 2026, before participating in an interfaith ceremony during the final days of their 2,300-mile “Walk for Peace.” (Drew Angerer/AFP via Getty Images)


“As a Buddhist, I find this ridiculous and insulting,” one user wrote.

Gabi is expected to make its next public appearance at Seoul’s upcoming Lantern Festival on May 16-17, honoring the Buddha’s birthday.


Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI

Sam Altman and Elon Musk are facing off in a high-stakes trial that could alter the future of OpenAI and its most well-known product, ChatGPT. In 2024, Musk filed a lawsuit accusing OpenAI of abandoning its founding mission of developing AI to benefit humanity and shifting focus to boosting profits instead.

Elon Musk; his financial manager and Neuralink CEO, Jared Birchall; and OpenAI cofounder Greg Brockman have already testified before the jury. On Wednesday, May 6th, Shivon Zilis, a former OpenAI board member who shares four children with Musk, is taking the stand, and the courtroom is hearing testimony from former OpenAI exec Mira Murati via video.

Microsoft CEO Satya Nadella is scheduled to appear on Monday, with OpenAI cofounder and former chief scientist Ilya Sutskever lined up to testify after that.

Musk was a cofounder of OpenAI and claims that Altman and Brockman tricked him into giving the company money, only to turn their backs on their original goal. OpenAI counters that “this lawsuit has always been a baseless and jealous bid to derail a competitor,” arguing it serves Musk’s own SpaceX / xAI / X companies, which have launched Grok as a competitor to ChatGPT.

Elon Musk — plaintiff, OpenAI cofounder and now CEO of rival xAI


Steven Molo — lead counsel for plaintiff

Jared Birchall — manager of Musk’s family office

Shivon Zilis — former OpenAI board member who shares multiple children with Musk

Sam Altman — defendant, CEO of OpenAI

William Savitt — lead counsel for defendant


Greg Brockman — president of OpenAI as well as a cofounder

Ilya Sutskever — former chief scientist at OpenAI and a cofounder

Yvonne Gonzalez Rogers — aka YGR, trial judge

Here’s all the latest on the trial between Musk and Altman:
