Can AI chatbots trigger psychosis in vulnerable people?

Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

What psychiatrists are seeing in patients using AI chatbots

Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.


Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

Why AI chatbot conversations feel different from past technology

Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.


How AI chatbots can reinforce false or delusional beliefs

Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.


Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

What research and case reports reveal about AI chatbots

Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.


A peer-reviewed Special Report published in Psychiatric News, titled “AI-Induced Psychosis: A New Frontier in Mental Health,” examined these emerging concerns and cautioned that the existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

What AI companies say about mental health risks

OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

What this means for everyday AI chatbot use

Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.


Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

Tips for using AI chatbots more safely

Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

  • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
  • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
  • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
  • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
  • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.


Kurt’s key takeaways

AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com.  All rights reserved.


You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd


Billy Woods has one of the highest batting averages in the game. Between solo records like Hiding Places and Maps and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And while no one would ever claim that Woods’ albums are light-hearted fare (these are not party records), Golliwog is his darkest to date.

This is not your typical horrorcore record. Other acts in the genre, like Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.

Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:

“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”

Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than with some of the other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.

That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:

If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips

The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.


Grok AI scandal sparks global alarm over child safety

Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That admission alone is alarming. What followed revealed a far broader pattern.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children.  (Silas Stein/picture alliance via Getty Images)

Grok quietly restricts image tools to paying users after backlash

As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.

The apology that raised more questions

Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.


After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”


Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)

Sexualized images of minors are illegal

This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.


In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

The scale of the problem is growing fast

A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

Real people are being targeted

The harm is not abstract. Reuters documented cases in which users asked Grok to digitally undress real women whose photos were posted on X, and in several of those cases Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

Governments respond worldwide

The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.


Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)

Concerns grow over Grok’s safety and government use

The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It has also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

What parents and users should know

If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.


Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com       


Kurt’s key takeaways

The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.



Copyright 2025 CyberGuy.com.  All rights reserved.


Google pulls AI overviews for some medical searches


In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.
