
Technology

Can AI chatbots trigger psychosis in vulnerable people?



Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

What psychiatrists are seeing in patients using AI chatbots

Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.



Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

Why AI chatbot conversations feel different from past technology

Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.


How AI chatbots can reinforce false or delusional beliefs

Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.


Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

What research and case reports reveal about AI chatbots

Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.


A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

What AI companies say about mental health risks

OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

What this means for everyday AI chatbot use

Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.



Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

Tips for using AI chatbots more safely

Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

  • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
  • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
  • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
  • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
  • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.



Kurt’s key takeaways

AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com.  All rights reserved.



Defense Secretary Pete Hegseth designates Anthropic a supply chain risk


This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.


As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.



What Trump’s ‘ratepayer protection pledge’ means for you



When you open a chatbot, stream a show or back up photos to the cloud, you are tapping into a vast network of data centers. These facilities power artificial intelligence, search engines and online services we use every day. Now there is a growing debate over who should pay for the electricity those data centers consume.

In his State of the Union address this week, President Trump introduced a new initiative called the “ratepayer protection pledge” to shift AI-driven electricity costs away from consumers. The core idea is simple: tech companies that run energy-intensive AI data centers should cover the cost of the extra electricity they require rather than passing those costs on to everyday customers through higher utility rates.

The hard part is what happens next.



At the State of the Union address Feb. 24, 2026, President Trump unveiled the “ratepayer protection pledge” aimed at shielding consumers from rising electricity costs tied to AI data centers. (Nathan Posner/Anadolu via Getty Images)

Why AI is driving a surge in electricity demand

AI systems require enormous computing power. That computing power requires enormous electricity. Today’s data centers can consume as much power as a small city. As AI tools expand across business, healthcare, finance and consumer apps, energy demand has risen sharply in certain regions.

Utilities have warned that the current grid in many parts of the country was not built for this level of concentrated demand. Upgrading substations, transmission lines and generation capacity costs money. Traditionally, those costs can influence rates paid by homes and small businesses. That is where the pledge comes in.

What the ratepayer protection pledge is designed to do

Under the ratepayer protection pledge, large technology companies would:

  • Cover the full cost of additional electricity tied to their data centers
  • Build their own on-site power generation to reduce strain on the public grid

Supporters say this approach separates residential energy costs from large-scale AI expansion. In other words, your household bill should not rise simply because a new AI data center opens nearby. So far, Anthropic is the clearest public backer. CyberGuy reached out to Anthropic for a comment on its role in the pledge. A company spokesperson referred us to a tweet from Anthropic Head of External Affairs Sarah Heck.

“American families shouldn’t pick up the tab for AI,” Heck wrote in a post on X. “In support of the White House ratepayer protection pledge, Anthropic has committed to covering 100% of electricity price increases that consumers face from our data centers.”

That makes Anthropic one of the first major AI companies to publicly state it will absorb consumer electricity price increases tied to its data center operations. Other major firms may be close behind. The White House reportedly plans to host Microsoft, Meta and Anthropic in early March to discuss formalizing a broader deal, though attendance and final terms have not been confirmed publicly.

Microsoft also expressed support for the initiative. 

“The ratepayer protection pledge is an important step,” Brad Smith, Microsoft vice chair and president, said in a statement to CyberGuy. “We appreciate the administration’s work to ensure that data centers don’t contribute to higher electricity prices for consumers.”  

Industry groups also point to companies such as Google and utilities including Duke Energy and Georgia Power as making consumer-focused commitments tied to data center growth. However, enforcement mechanisms and long-term regulatory details remain unclear.



The White House plans talks with Microsoft, Meta and Anthropic about shifting AI energy costs away from consumers. (Eli Hiller/For The Washington Post via Getty Images)

How this could change the economics of AI

AI infrastructure is already one of the most expensive technology buildouts in history. Companies are investing billions in chips, servers and real estate. If firms must also finance dedicated power plants or pay premium rates for grid upgrades, the cost of running AI systems increases further. That could lead to:

  • Slower expansion in some markets
  • Greater investment in renewable energy and storage
  • More partnerships between tech firms and utilities

Energy strategy may become just as important as computing strategy. For consumers, this shift signals that electricity is now a central part of the AI conversation. AI is no longer only about software. It is also about infrastructure.

The bigger consumer tech picture

AI is becoming embedded in smartphones, search engines, office software and home devices. As adoption grows, so does the hidden infrastructure supporting it. Energy is now part of the conversation around everyday technology. Every AI-generated image, voice command or cloud backup depends on a power-hungry network of servers.

By asking companies to account more directly for their electricity use, policymakers are acknowledging a new reality. The digital world runs on very physical resources. For you, that shift could mean more transparency. It also raises new questions about sustainability, local impact and long-term costs.



As AI expansion strains the grid, a new proposal would require tech firms to fund their own power needs. (Sameer Al-Doumy/AFP via Getty Images)

What this means for you

If you are a homeowner or renter, the practical question is simple. Will this protect my electric bill? In theory, separating data center energy costs from residential rates could reduce the risk of price spikes tied to AI growth. If companies fund their own generation or grid upgrades, utilities may have less reason to spread those costs among all customers.

That said, utility pricing is complex. It depends on state regulators, long-term planning and local energy markets.

Here is what you can watch for in your area:

  • New data center construction announcements
  • Utility filings that mention large commercial load growth
  • Public service commission decisions on rate adjustments

Even if you rarely use AI tools, your community could feel the effects of a nearby data center. The pledge is intended to keep those large-scale power demands from showing up in your monthly bill.


Kurt’s key takeaways

The ratepayer protection pledge highlights an important turning point. AI is no longer only about innovation and speed. It is also about energy and accountability. If tech companies truly absorb the cost of their expanding power needs, households may avoid some of the financial strain tied to rapid AI growth. If not, utility bills could become an unexpected front line in the AI era.

As AI tools become part of daily life, how much extra power are you willing to support to keep them running? Let us know by writing to us at Cyberguy.com.




Copyright 2026 CyberGuy.com. All rights reserved.



Here’s your first look at Kratos in Amazon’s God of War show


Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image but a notable one showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes: Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.
