Technology
Can AI chatbots trigger psychosis in vulnerable people?
Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.
Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
What psychiatrists are seeing in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.
Why AI chatbot conversations feel different from past technology
Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating.
For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.
Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.
A peer-reviewed Special Report published in Psychiatric News, titled “AI-Induced Psychosis: A New Frontier in Mental Health,” examined these emerging concerns and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal, nonsystematic reporting.
What AI companies say about mental health risks
OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.
Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.
Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
- Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
- Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.
Kurt’s key takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Govee’s new LED Lightwall comes with its own self-standing frame
Govee has announced an upgraded version of its hanging Curtain Lights Pro that can instead be used nearly anywhere you have access to an outlet or large battery. At $449.99, Govee’s new Lightwall is more than twice as expensive as the $199.99 Curtain Lights Pro, but comes with more LEDs in a denser array and a self-standing aluminum frame that can be assembled in 10 to 15 minutes without the need for any tools.
When hung from its stand, the Lightwall measures 7.9 feet wide and 5.3 feet tall and features 1,536 color-changing LEDs spaced about 1.96 inches apart in a 48 x 32 grid. It’s water-resistant, and with the ability to refresh at up to 35fps the Lightwall almost sounds like it could be used as a personal backyard Jumbotron, but it’s not designed for watching TV or movies.
The Lightwall instead connects to Govee’s Home app where you can select from over 200 preset scenes and simple animations, choose from 10 different music modes that generate lighting patterns matched to beats, or synchronize its colors to other Govee lighting products to create a cohesive mood.
The app can also use AI to create custom animated GIFs from simple text prompts, or you can take matters into your own hands and create custom designs by sketching in the app with your finger and stacking up to 30 layers of doodles. The Lightwall is smart home compatible and supports Matter, too, so in addition to managing it through Govee’s app you can control it using voice commands through smart devices with Google Assistant or Amazon Alexa.
Technology
Roblox adds age-based accounts for kids and teens
‘Fox & Friends’ exclusive: Roblox CEO announces new safety measures for kids
Roblox Co-founder and CEO Dave Baszucki details new safety measures, including Kids and Select accounts, on Fox & Friends. He addresses lawsuits and concerns about predators, emphasizing age verification, content filtering, and strict communication controls to protect users. Baszucki states Roblox has “no tolerance” for bad actors and builds safety by default, allowing parents to customize chat settings for their children.
If your child plays Roblox, they are part of a massive global audience. Roblox has reported more than 144 million daily active users, with a large share made up of kids and teens who log in to play games, create content and connect with friends. That reach is exactly why a new change rolling out in early June matters.
Roblox is introducing two new account types designed to better match what kids play and who they can talk to based on age. The shift centers on structure. Instead of one shared experience with layered controls, Roblox is building separate environments for different age groups. As a result, content, chat and parental controls will adjust automatically as a child grows.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com, trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
Roblox rolls out a new AI system that analyzes entire scenes in real time to detect harmful content across its platform. (Brent Lewin/Bloomberg via Getty Images)
What are Roblox Kids and Roblox Select accounts?
Roblox is dividing younger users into two groups, each with its own rules and experience.
Roblox Kids (ages 5 to 8)
This is the most restricted environment. It is designed for younger children who need tighter guardrails.
- Access limited to games rated Minimal or Mild
- Only games that pass a three-step review process
- Chat is turned off by default
- A distinct visual design so parents can easily recognize the account
The idea here is simple. Kids see a limited version of Roblox that removes riskier content and disables communication.
Roblox Select (ages 9 to 15)
This group gets more flexibility, but still within limits.
- Access to games rated up to Moderate
- Same multi-step game screening process
- Chat settings remain on by default in most regions
- Visual indicators show the account type
At this stage, Roblox assumes users can handle a broader range of experiences, but still keeps filters in place.
How Roblox decides what games kids can play
Not every game makes the cut. Roblox is adding a continuous evaluation system that runs behind the scenes. Here’s how it works:
1) Developer verification
Creators must verify their identity, enable two-step security and maintain a Roblox Plus subscription.
2) Real-time evaluation
Older users, age 16 and up, effectively test new games first. Roblox studies how they interact and reviews reports before exposing those games to younger players.
3) Content eligibility check
Games receive maturity ratings such as Minimal, Mild or Moderate. Certain categories, like social hangouts or free-form drawing, are excluded by default for younger users. This layered approach combines AI moderation, human review and real-world gameplay signals.
Age checks now control the entire experience
Roblox is expanding the same age-check system it introduced earlier this year for chat.
- Users under 9: Roblox Kids
- Users 9 to 15: Roblox Select
- Users 16 and older: standard Roblox account
If a user does not complete an age check, they face stricter limits. They can only access lower-rated games and cannot use chat. Once verified, the system automatically moves them into the correct account type.
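The tiering described above boils down to a simple lookup. The sketch below, in Python, is purely illustrative: the function name and tier labels are assumptions for the example, and Roblox’s actual system runs server-side and is not public.

```python
def account_tier(age, verified):
    """Illustrative mapping of a verified age to a Roblox account tier.

    Mirrors the article's description: unverified users fall back to
    stricter limits (lower-rated games only, no chat), and verified
    users are sorted into Kids, Select, or a standard account.
    """
    if not verified or age is None:
        return "restricted"  # age check not completed
    if age <= 8:
        return "Roblox Kids"
    if age <= 15:
        return "Roblox Select"
    return "standard"
```

For example, a verified 12-year-old would land in `"Roblox Select"`, while the same user without a completed age check would stay in the restricted fallback.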
Roblox officials say the new system aims to proactively protect children while maintaining gameplay for compliant users. (Riccardo Milani/Hans Lucas/AFP via Getty Images)
Accounts evolve as kids grow
There is no need to manually switch settings over time.
- At age 9, users move from Kids to Select
- At age 16, they move to a standard account
This automatic progression is designed to simplify things for families while keeping protections in place at each stage.
Parental controls get more precise
Roblox is also expanding what parents can do.
- Block specific games through age 15
- Manage direct chat settings until age 15
- Approve access to individual games outside default limits
- View what games kids play and who they interact with
These tools give parents more direct control instead of relying only on broad content filters.
A move toward global content ratings
Later this year, Roblox plans to align with the International Age Rating Coalition framework. That includes familiar systems like ESRB in the U.S. and PEGI in Europe. The goal is to make ratings clearer and more consistent across regions.
Why this matters to families
This update changes how Roblox works at a fundamental level. Instead of asking parents to constantly adjust settings, the platform builds age-appropriate experiences from the start. It also reflects a broader shift in tech. Platforms are under pressure to design safety into the product, not tack it on later.
As Larry Magid, CEO of ConnectSafely, an organization focused on helping families navigate digital safety, put it:
“By combining age assurance, stronger creator accountability, and parental controls, Roblox is helping set a higher standard for how platforms can better protect younger users while preserving positive online experiences.”
Kurt’s key takeaways
Roblox targets nuanced rule-breaking by analyzing avatars, text and environments together instead of in isolation. (JasonDoiy/Getty Images)
Roblox is not removing risk entirely. No platform can. What it is doing is tightening the structure around how kids interact with content and other players. For parents, this could make things simpler. For kids, the experience will feel more tailored to where they are in life. The bigger question is whether this becomes the norm across gaming and social platforms.
If platforms start shaping experiences based on age by default, does that improve safety or limit how kids explore and learn online? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
YouTube now lets you turn off Shorts
YouTube’s time management settings now have an option to put a zero-minute time limit on Shorts, effectively removing them from your app in Android and iOS. The option is an update to the Shorts timer YouTube originally announced in October; the lowest previous option was 15 minutes.
The feature was expanded in January to give parents some control over how long their kids spend scrolling through Shorts, with an option for zero minutes “coming soon.” According to YouTube spokesperson Makenzie Spiller, the option to set the timer to zero is now “live for all parents, and is currently being rolled out to everyone,” including users with regular adult accounts.
Regardless of age, it can be a handy tool for anyone who wants to spend a little less time scrolling. The Shorts tab won’t show any videos once you hit your limit, just a notification that you’ve “reached your Shorts feed limit.” In our tests, hitting the time limit also removes Shorts from the Home screen, so by setting the timer to zero you can ignore Shorts entirely if you want. To turn on the timer, go to the settings in the YouTube app, select “time management,” then toggle on the Shorts feed limit and pick a time for it.