Technology
Can AI chatbots trigger psychosis in vulnerable people?
Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.
Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
What psychiatrists are seeing in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.
Why AI chatbot conversations feel different from past technology
Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating.
For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.
Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.
A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.
What AI companies say about mental health risks
OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.
Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.
Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
- Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
- Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.
Kurt’s key takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Dyson’s powerful 360 Vis Nav robovac is down to $279.99 for a limited time
If you’re tired of running your vacuum multiple times just to get the dirt and debris out of the carpets in your living room, Dyson’s 360 Vis Nav is worth a look. It’s one of the more powerful robot vacuums currently available, and now through May 11th (or while supplies last), it’s on sale at Woot for an all-time low of $279.99 ($919 off) with a full two-year warranty.
The last-gen 360 Vis Nav offers a whopping 65 air watts of suction, allowing it to pull dirt, dust, and pet hair from carpets impressively well. In her brief time testing the robovac, my colleague Jennifer Pattison Tuohy said the Dyson “demolished a pile of dry oatmeal in seconds,” adding that she briefly worried it might even suck up the tassels on her large rug (it didn’t). By comparison, many robot vacuums — including Dyson’s new $1,200 Spot + Scrub AI — require multiple passes to fully eradicate the same kind of mess on your floor.
What’s more, the robovac’s small, D-shaped design and the location of its ultra-fluffy brush allow it to dig into edges and corners more effectively than many of the more roundish robot vacuums, while its lower profile lets it easily get under most beds and sofas. The roomy 500ml dustbin also means you likely won’t need to empty it too often, while Dyson’s built-in handle and terrific quick-release button make removing said bin a relatively simple task when it’s time to do so.
While it is undeniably powerful, it’s worth noting that the 360 Vis Nav lacks a few features found on some of its more modern rivals. Although its navigation worked well enough during our testing, it lacks AI-powered obstacle avoidance and doesn’t come with a self-emptying dock. Battery life is also relatively short at around 65 minutes per charge. Nonetheless, if your top priority is quickly removing dust, dirt, and pet hair from carpets without multiple passes, the Dyson remains an option worth considering, especially at this discounted price.
Global scam crackdown leads to 276 arrests
We’ve often warned you about romance scams and crypto “investment” opportunities that feel too good to pass up. Now, there’s a major update that shows just how organized these operations have become.
The Department of Justice and Federal Bureau of Investigation announced a sweeping international operation that led to at least 276 arrests and the shutdown of multiple scam centers tied to cryptocurrency fraud. These networks targeted Americans and drained millions of dollars from victims.
The operation spanned continents and involved coordinated efforts by law enforcement and tech companies.
The Department of Justice and FBI say international scam networks used romance and fake crypto investment schemes to steal millions from victims. (Helena Dolderer/Picture Alliance)
How the cryptocurrency scam crackdown unfolded
Authorities worked with partners around the world, including the Dubai Police and law enforcement agencies in Thailand and beyond. Together, they dismantled at least nine scam centers linked to large-scale crypto fraud.
Several suspects now face federal charges in the United States, including wire fraud and money laundering. Investigators say these operations functioned like businesses, with recruitment, management layers and structured systems designed to deceive victims.
Officials made it clear that this effort sends a message. Fraud crosses borders, and enforcement is now doing the same.
How crypto investment scams target victims
These schemes often follow a pattern known as “pig-butchering.” It is a slow, calculated tactic that builds trust before any money is involved.
A scammer may reach out through social media or a messaging app and start a casual conversation. Over time, that interaction turns more personal. In some cases, it feels like a real relationship. Once trust is established, the topic shifts toward investing, often framed as a unique crypto opportunity.
Victims are guided through setting up accounts and transferring funds to platforms that appear legitimate. The dashboards may even show fake gains to build confidence. At that point, control of the money is already gone. Funds are quickly moved through multiple accounts and eventually end up with the scammers.
Many victims are encouraged to keep going, sometimes borrowing money or taking out loans to invest more. By the time the truth becomes clear, the losses can be devastating.
How Meta Platforms, Inc. helped track scam networks
Meta Platforms, Inc. played a key role in the investigation by providing data that helped law enforcement identify and track these networks.
The company says it has taken aggressive action across its platforms. In 2025 alone, Meta removed more than 159 million scam ads and shut down 10.9 million accounts linked to scam centers. More recently, it disabled over 150,000 accounts connected to these networks as part of a coordinated enforcement effort.
“Meta is committed to combatting online fraud and scams, and we are proud to partner with law enforcement in these efforts,” Chris Sonderby, Meta’s vice president and deputy general counsel, said. “We applaud the DOJ and FBI for their leadership in holding criminal scammers accountable and protecting American consumers.”
Federal authorities announced a sweeping international crackdown that led to at least 276 arrests tied to cryptocurrency scam centers targeting Americans. (Kurt “CyberGuy” Knutsson)
New tools to stop cryptocurrency scams in real time
Meta is also rolling out new protections across its apps to help users spot scams before they get pulled in.
On Facebook, users may see alerts tied to suspicious friend requests, especially when an account shows unusual behavior such as limited connections or inconsistent location details.
On WhatsApp, new warnings are designed to prevent scammers from linking their own devices to someone else’s account, giving users a chance to pause before approving a risky request.
Messenger is also expanding its scam detection tools. When a conversation shows patterns linked to common fraud tactics, users may receive prompts that explain the risk and suggest actions like blocking or reporting the account.
Why this cryptocurrency scam crackdown matters to you
This operation highlights how organized these scam networks have become. These are not random messages from a single person. They are coordinated groups running structured operations designed to build trust, create urgency and move money quickly.
Even with hundreds of arrests, the threat remains. New networks continue to emerge, often using the same playbook with slight changes. That means staying informed is still one of the most effective ways to protect yourself.
Ways to stay safe from cryptocurrency scams
Scammers follow familiar patterns, which means there are clear warning signs you can watch for and simple steps you can take to protect yourself.
1) Slow down unexpected connections
If someone you do not know reaches out and quickly builds a personal connection, slow things down and question the situation. Scammers rely on momentum, so taking a pause can help you spot inconsistencies.
2) Verify investment platforms before sending money
Before sending money to any investment platform, take time to verify that it is legitimate. A professional-looking website or app does not guarantee it is real. Look for independent reviews and official registration details.
3) Avoid sending crypto to unknown sources
Avoid sending cryptocurrency to individuals or platforms you cannot confirm. Once those transactions go through, they are extremely difficult to recover.
4) Watch for pressure and urgency
Be aware of pressure. If someone pushes you to act quickly or invest more, that urgency is often a warning sign.
5) Use strong antivirus protection
Strong antivirus software can help block malicious links, fake investment sites and other threats before they reach you, adding another layer of defense against scam attempts. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
Meta said it removed more than 159 million scam ads in 2025 and helped investigators track networks tied to cryptocurrency fraud. (Halfpoint/Getty Images)
6) Limit your personal data exposure
Scammers often rely on publicly available information to build trust. Reducing how much of your personal data appears online by using a data removal service can make it harder for them to target you in the first place. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting CyberGuy.com.
7) Strengthen your account security
It also helps to strengthen your digital security. Enable two-factor authentication (2FA) on your accounts and use trusted security tools to reduce exposure to malicious links and messages.
8) Report scams as soon as possible
If you believe you have been targeted or defrauded, report it to the FBI’s Internet Crime Complaint Center at ic3.gov as soon as possible.
Kurt’s key takeaways
This global crackdown is a meaningful step forward. It shows what can happen when law enforcement, tech companies and international partners work together. At the same time, these scams are not going away. The tactics will continue to evolve, and new networks will take the place of those that were shut down. Awareness and caution remain your strongest defenses.
We report a lot about scams but not so much about scammers getting caught. Does this make you feel like real progress is being made in stopping them? Let us know by writing to us at CyberGuy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Asus chases Elgato with its own secondary touchscreen display
Asus’s latest gaming monitor is a little smaller than usual. The ROG Strix XG129C, announced on Friday, is a 12.3-inch touchscreen IPS display intended to serve as a sidekick for a larger main monitor, similar to the 14.1-inch secondary display in the 2020 Asus ROG Zephyrus Duo 15. It’s a slightly smaller competitor to Corsair’s Xeneon Edge, which has a 14.5-inch panel but the same 720p resolution.
Asus says the XG129C covers 125 percent of the sRGB color gamut and 90 percent of the DCI-P3 color gamut. It also comes with a one-year subscription for the hardware monitoring tool AIDA64 Extreme, which would usually cost $65. Besides acting as a performance monitor for your PC, sidekick displays like this can also be handy as an extension for streaming or editing setups, much like Elgato’s Stream Deck.
Along with the little XG129C, Asus also announced the ROG Strix OLED XG34WCDMS, a 34-inch RGB Tandem QD-OLED gaming monitor. It features a 280Hz refresh rate and a 3440 x 1440 resolution, and, according to Asus, covers 99 percent of the DCI-P3 color gamut. Asus has not yet officially announced pricing for either display.