Technology
Can AI chatbots trigger psychosis in vulnerable people?
Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.
Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
What psychiatrists are seeing in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.
Why AI chatbot conversations feel different from past technology
Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating.
For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.
Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.
A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.
What AI companies say about mental health risks
OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.
Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.
Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
- Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
- Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Malicious Mac extensions steal crypto wallets and passwords
Mac users often assume they’re safer than everyone else, especially when they stick to official app stores and trusted tools.
That sense of security is exactly what attackers like to exploit. Security researchers have now uncovered a fresh wave of malicious Mac extensions that don’t just spy on you, but can also steal cryptocurrency wallet data, passwords and even Keychain credentials. What makes this campaign especially concerning is where the malware was found: inside legitimate extension marketplaces that many people trust by default.
Once active, GlassWorm targets passwords, crypto wallets, and even your macOS Keychain without obvious warning signs. (Cyberguy.com)
How malicious Mac extensions slipped into trusted stores
Security researchers at Koi Security uncovered a new wave of the GlassWorm malware hiding inside extensions for code editors like Visual Studio Code (via Bleeping Computer). If you’re not familiar with code editors, they’re tools developers use to write and edit code, similar to how you might use Google Docs or Microsoft Word to edit text. These malicious extensions appeared on both the Microsoft Visual Studio Marketplace and OpenVSX, platforms widely used by developers and power users.
At first glance, the extensions looked harmless. They promised popular features like code formatting, themes or productivity tools. Once installed, though, they quietly ran malicious code in the background. Earlier versions of GlassWorm relied on hidden text tricks to stay invisible. The latest wave goes further by encrypting its malicious code and delaying execution, making it harder for automated security checks to catch.
Even though this campaign is described as targeting developers, you don’t need to write code to be at risk. If you use a Mac, install extensions or store passwords or cryptocurrency on your system, this threat still applies to you.
What GlassWorm does once it’s on your Mac
Once active, GlassWorm goes after some of the most sensitive data on your device. It attempts to steal login credentials tied to platforms like GitHub and npm, but it doesn’t stop there. The malware also targets browser-based cryptocurrency wallets and now tries to access your macOS Keychain, where many saved passwords are stored.
Researchers also found that GlassWorm checks whether hardware wallet apps like Ledger Live or Trezor Suite are installed. If they are, the malware attempts to replace them with a compromised version designed to steal crypto. That part of the attack isn’t fully working yet, but the functionality is already in place.
To maintain access, the malware sets itself up to run automatically after a reboot. It can also allow remote access to your system and route internet traffic through your Mac without you realizing it, turning your device into a quiet relay for someone else.
Some of the malicious extensions showed tens of thousands of downloads. Those numbers can be manipulated, but they still create a false sense of trust that makes people more likely to install them.
7 steps you can take to stay safe from malicious Mac extensions
Malicious extensions don’t look dangerous. That’s what makes them effective. These steps can help you reduce the risk, even when threats slip into trusted marketplaces.
1) Only install extensions you actually need
Every extension you install increases risk. If you’re not actively using one, remove it. Be especially cautious of extensions that promise big productivity gains, premium features for free or imitate popular tools with slightly altered names.
2) Verify the publisher before installing anything
Check who made the extension. Established developers usually have a clear website, documentation and update history. New publishers, vague descriptions or cloned names should raise red flags.
These malicious extensions looked like helpful tools but quietly ran hidden code once installed. (Cyberguy.com)
3) Use a password manager
A password manager keeps your logins encrypted and stored safely outside your browser or editor. It also ensures every account has a unique password, so if one set of credentials is stolen, attackers can’t reuse it elsewhere.
Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
4) Run strong antivirus software on your Mac
Modern macOS malware doesn’t always drop obvious files. Antivirus tools today focus on behavior, looking for suspicious background activity, encrypted payloads and persistence mechanisms used by malicious extensions. This adds a critical safety net when something slips through official marketplaces.
The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
5) Consider a personal data removal service
When your data leaks, it often spreads across data broker sites and breach databases. Personal data removal services help reduce how much of your information is publicly available, making it harder for attackers to target you with follow-up scams or account takeovers.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, but neither is your privacy. These services do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. That gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing breach data with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Turn on two-factor authentication (2FA)
Enable 2FA wherever possible, especially for email, cloud services, developer platforms and crypto-related accounts. Even if a password is stolen, 2FA can stop attackers from logging in.
7) Keep macOS and your apps fully updated
Security updates close gaps that malware relies on. Turn on automatic updates so you’re protected even if you miss the headlines or forget to check manually.
Mac users often trust official app stores, but that trust is exactly what attackers are counting on. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaway
GlassWorm shows that malware doesn’t always come from shady downloads or obvious scams. Sometimes it hides inside tools you already trust. Even official extension stores can host malicious software long enough to cause real harm. If you use a Mac and rely on extensions, a quick review of what’s installed could save you from losing passwords, crypto or access to important accounts.
When was the last time you checked the extensions running on your Mac? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
BMW says electric M3 will be a ‘new level’ of performance
BMW teased its forthcoming all-electric M-series performance sedan today, promising that the quad-motor M3 sports car would feature specs that are truly next level when it arrives in 2027.
The M3 will have four electric motors and simulated gear shifting, a feature that is quickly becoming a must-have for electrified sports cars. BMW says the setup unlocks the benefits of both rear and all-wheel drive, with the ability to decouple the front axle.
The electric M3 will also be built on BMW’s Neue Klasse platform, which promises more efficient batteries, lightning-fast charging and more powerful computers. The architecture will be 800-volt, the regenerative braking will be highly efficient, and if the camouflaged pictures are any indication, it will be a real looker on the streets.
Speaking of computers, the M3 will have four of them, unified under its oddly named “Heart of Joy” component, which aggregates all the traction, stability and electric motor management functions of the vehicle. That means when software updates become available, the vehicle’s brain will be able to receive them over the air faster than BMW’s current processors can.
The M3’s simulated gear shifting will feature a “newly developed soundscape” that “channels pure emotion.” Like other automakers, BMW is loath to alienate its loyal M-series customers by giving them all the torque but none of the gearing feedback. And now a fake “soundscape” will accompany all that shifting. Porsche, Hyundai and Dodge are also on board the fake EV gear-shifting bandwagon.
FCC cracks down on robocall reporting violations
If you are tired of scam calls slipping through the cracks, federal regulators just took a meaningful step. The Federal Communications Commission finalized new penalties aimed at telecom companies that submit false, inaccurate or late information to a key anti-robocall system. The changes go into effect Feb. 5. They strengthen oversight of the Robocall Mitigation Database, which plays a central role in tracking spoofed calls and holding providers accountable.
What changed and why it matters
Under the new rules, voice service providers must recertify every year that their filings in the Robocall Mitigation Database are accurate and current. The FCC will now back that requirement with real financial consequences.
The FCC is cracking down on robocalls by tightening rules that govern how telecom providers verify and report call traffic. (iStock)
Here is what the commission approved:
- $10,000 fines for submitting false or inaccurate information
- $1,000 fines for each database entry not updated within 10 business days
- Annual recertification of all provider filings
- Two-factor authentication to protect database access
- A $100 filing fee for initial Robocall Mitigation Database submissions and for required annual recertifications
The FCC also made clear that these violations are considered ongoing until corrected, meaning fines can accrue on a daily basis rather than being treated as one-time penalties.
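To see why the "ongoing until corrected" language matters, here is a rough back-of-the-envelope sketch of how exposure could grow if the $1,000 per-entry fines accrue each day a filing stays uncorrected. The scenario and the accrual formula are illustrative assumptions on my part; the actual enforcement math is up to the FCC.

```python
# Illustrative sketch of how daily-accruing FCC penalties could add up.
# Per-violation amounts come from the rules above; the scenario is hypothetical.

FINE_PER_STALE_ENTRY = 1_000   # per entry not updated within 10 business days
FINE_FALSE_INFO = 10_000       # per false or inaccurate submission

def accrued_penalty(stale_entries: int, days_uncorrected: int,
                    false_submissions: int = 0) -> int:
    """Estimate total exposure if stale-entry fines accrue daily until fixed."""
    daily = stale_entries * FINE_PER_STALE_ENTRY
    return daily * days_uncorrected + false_submissions * FINE_FALSE_INFO

# A provider leaving 3 stale entries uncorrected for 30 days,
# plus one false submission:
print(accrued_penalty(3, 30, 1))  # 3 * 1000 * 30 + 10000 = 100000
```

Even under conservative assumptions, a month of inaction turns a paperwork lapse into a six-figure liability, which is exactly the incentive the FCC says it is trying to create.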
According to the FCC, many past submissions failed basic standards. Some lacked accurate contact details. Others included robocall mitigation plans that did not describe any real mitigation practices at all.
How the Robocall Mitigation Database works
The Robocall Mitigation Database requires providers to verify and certify the identities of callers that use their networks. Regulators and law enforcement rely on it to trace spoofed calls and illegal robocall campaigns. That task is harder than it sounds. America’s telecom system is vast and fragmented. Calls often pass through multiple networks owned by major carriers like Verizon and AT&T, as well as smaller regional providers and VoIP services. When calls hop between networks, verification can be missed or ignored. For years, the FCC did not closely verify or enforce the accuracy of these filings. That gap raised serious concerns.
Under the updated rules, providers that fail to recertify or correct deficient filings can be referred to enforcement and removed from the database, which can prevent other carriers from carrying their calls at all.
Why inaccurate robocall data hurts consumers
When robocall filings are wrong or outdated, scam calls are more likely to reach your phone. Providers may treat a call as trusted even when it should raise red flags. That gives robocallers more time to operate and makes it harder for regulators to shut them down quickly. The FCC says stronger penalties and tighter oversight are meant to close that gap before consumers pay the price.
New FCC penalties target inaccurate robocall filings that have allowed scam calls to slip through carrier networks. (Kurt “CyberGuy” Knutsson)
Pushback and pressure on the FCC
When the FCC proposed penalties, it asked whether violations should be treated as minor paperwork mistakes or as serious misrepresentations. Telecom trade groups pushed back. They argued that fines should not apply unless providers first get a chance to fix errors or unless the FCC proves the filings were willfully inaccurate.
State attorneys general and the robocall monitoring platform ZipDX urged a tougher stance. They warned that false filings undermine every effort to stop illegal robocalls. The FCC ultimately chose a middle path. It rejected treating violations as harmless paperwork errors. At the same time, it stopped short of imposing the maximum penalties allowed by law.
What this means to you
For everyday consumers, this move matters more than it may seem. Accurate robocall reporting makes it easier to trace scam calls, shut down bad actors and prevent spoofed numbers from reaching your phone. Stronger penalties give telecoms a reason to take these filings seriously instead of treating them as routine compliance chores.
The FCC also set a firm annual deadline. Providers must recertify their robocall mitigation filings each year by March 1, creating a predictable enforcement checkpoint. While this will not end robocalls overnight, it tightens a weak link that scammers have exploited for years.
Simple steps you can take right now to reduce robocalls
Even with tougher FCC enforcement, scam calls will not disappear overnight. Here are a few smart steps you can take today to reduce your risk.
- Do not answer unknown calls. If it is important, a legitimate caller will leave a voicemail.
- Never press buttons or say yes to robocall prompts. That confirms your number is active and can trigger more scam calls.
- Report scam calls to your carrier. Most major carriers let you report robocalls directly through their call log or app.
- Register your number with the National Do Not Call Registry at donotcall.gov. It will not stop scammers, but it can reduce legitimate telemarketing calls.
- Block repeat offenders. If the same number keeps calling, block it so your phone stops ringing altogether.
- Be cautious with callback numbers. Scammers often spoof local area codes to look familiar.
The FCC says accurate robocall reporting by telecoms helps carriers identify and shut down scam traffic faster, but consumer habits still matter.
Pro tip: remove your personal data at the source
Robocalls do not come out of nowhere. Many start with your personal information being sold or shared by data brokers. These companies collect phone numbers, addresses, emails and even family details from public records, apps, purchases and online activity. Scammers and shady marketers buy that data to build call lists. Removing your data from data broker sites can reduce the number of robocalls you receive over time. You can try to do this manually by finding individual data broker websites and submitting removal requests one by one. The process is time-consuming and often needs to be repeated.
Some people choose to use a data removal service to automate this process and continuously monitor for re-posting. That can help limit how often your phone number circulates among marketers and scammers. Less exposed data means fewer opportunities for robocallers to target you. Cutting off robocalls often starts long before your phone rings.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
By strengthening oversight and accountability, the FCC aims to shut down illegal robocalls before they ever reach your phone. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
Robocalls thrive when accountability breaks down. By adding meaningful fines, stronger security, annual recertification and filing fees, the FCC is signaling that accuracy is no longer optional. Because penalties can continue to build until problems are fixed, telecoms now face real consequences for ignoring or delaying corrections. This rule forces providers to own their role in stopping illegal calls instead of passing the blame along the network chain. Real progress will depend on enforcement, but this is one of the clearest signs yet that regulators are closing gaps scammers rely on.
Do you think stricter penalties will finally push telecoms to take robocall prevention seriously, or will scammers just find the next loophole? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.