Technology
Can AI chatbots trigger psychosis in vulnerable people?
Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.
Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
What psychiatrists are seeing in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.
Why AI chatbot conversations feel different from past technology
Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating.
For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.
Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.
A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.
What AI companies say about mental health risks
OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.
Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.
Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
- Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
- Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.
Kurt’s key takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
The future of local TV news has taken a Trumpian turn
A long time ago, in 2004, the Federal Communications Commission laid down a rule designed to prevent a monopoly: No one company could broadcast to more than 39 percent of all the TV households in the United States. But then Donald Trump returned to the White House in 2025. Brendan Carr became FCC chairman and immediately kicked off a deregulatory initiative called “Delete, Delete, Delete,” in which Carr vowed to get rid of “every rule, regulation, or guidance document” that placed “unnecessary regulatory burdens” on companies. And within months, Nexstar, which already owned over 200 stations nationwide and had hit its ownership cap, announced that it had entered an agreement to purchase its rival, Tegna, for an estimated $6.2 billion — something that could only happen, however, if Carr agreed to change the FCC’s rules.
If you ask Nexstar why it’s pursuing a merger that would give it control of over 80 percent of the market, it’d point to Big Tech as the culprit. As advertisers take their money to Netflix, YouTube, and other digital streamers, linear television — the local television news, the broadcast affiliates, the basic cable networks — has suffered, forcing them to consolidate and shut down newsrooms. In that sense, Nexstar argued, the merger would help it compete for ad revenue with the streaming services, thereby building more robust local journalism. However, the merger’s opponents believe that this is a basic violation of antitrust laws and principles — not to mention the danger of letting one company have editorial control over the vast majority of America’s local television newsrooms.
But the second Trump administration handles regulatory hurdles a little differently than others, and companies have found that it’s faster to get what they want if they bypass the agencies and talk (read: suck up) to Trump directly. And when Nexstar did so publicly, it confirmed its opponents’ fears about political influence. Last September, in the fraught weeks after the fatal shooting of Charlie Kirk, Nexstar announced it would no longer broadcast Jimmy Kimmel Live! — a response to Carr’s claim that the FCC could revoke the broadcast licenses of TV stations that aired the comedian’s comments related to Kirk. It briefly led to ABC suspending Kimmel’s show, though ABC and Nexstar soon reversed their decision after a massive nationwide backlash and an ABC boycott.
However, Nexstar’s loyalty to Trump himself was not enough to win over his most powerful MAGA supporters. Newsmax, a cable news network with a deeply pro-Trump bent, and its CEO, longtime Trump donor and outside adviser Chris Ruddy, filed a lawsuit objecting to the merger, claiming that Nexstar’s anticompetitive behavior would force channels like his off the air with steeper carriage fees. He specifically accused Nexstar of jacking up the fees for stations to carry Newsmax, while offering its similar network, NewsNation, for much cheaper.
The Nexstar-Tegna MAGA makeover then took a more subtle turn. NewsNation hired the pro-Trump Fox News commentator Katie Pavlich and gave her her own primetime show. (The network had already hired a slew of former Fox journalists as well.) Around this time, a political group called Keep News Local began airing ads in DC that seemed to directly address Trump, praising him for having “defeated the fake news monopolies before through independent voices and local news” and claiming that the Nexstar-Tegna merger was “crucial for MAGA to survive.” (A little self-contradictory and mildly illogical, but it’s the kind of stuff that Trump likes to hear.) When I last spoke to Ruddy in February, I asked if he’d worried that the dark money going into Keep News Local would sway Trump, and he chose his words carefully: “I think at the end of the day, Trump makes up his own mind. I’m not sure he’s going to be influenced by an ad campaign.”
For months, no one could accurately predict if Trump would override Carr’s wishes and bless the deal, as he’s often done for other companies facing regulatory scrutiny. Trump’s Truth Social posts about the merger have been a good indicator of how precarious the merger has been and who’s been able to influence him at any given moment: Last November, he blasted the deal as an “EXPANSION OF THE FAKE NEWS NETWORKS,” but by February, he posted that the deal would “help knock out the Fake News because there will be more competition.”
Several current and former NewsNation employees told Status at the time that they feared that the parent company was steering NewsNation away from the centrist, “unbiased” reputation they’d long cultivated. “A lot of people within the network believe that the network has gone hard right to appeal to Trump and Brendan Carr,” one former employee told Status. Coincidentally, days before the deal was finalized, NewsNation began ramping up its explicitly pro-Trump content, tweeting a clip of CNN’s Kaitlan Collins being berated by White House press secretary Karoline Leavitt, along with the comment “Just going to leave this here.”
When Trump greenlit the merger in mid-March, but before the FCC’s three commissioners could vote on whether to waive the ownership cap, Nexstar and Tegna immediately announced a new complication: Tegna and Nexstar had already started merging. Tegna was no more and CEO Mike Steib had already sold $22.6 million of his company stock.
In response, eight state attorneys general and satellite TV operator DirecTV, which had already been planning to file separate federal antitrust suits against the merger, asked US District Judge Troy Nunley in Sacramento for an emergency restraining order that would prevent Nexstar from taking over Tegna’s assets. The order was granted on March 27th, and on April 17th, Nunley issued a formal injunction, ruling that Tegna must be operated as an independent financial entity and that Nexstar must take steps to ensure it remains separate from Tegna pending further legal proceedings.
For now, Nunley has allowed the states and DirecTV to combine their cases, in which both argue that the merger was a clear violation of antitrust laws and would crush news competition.
Meanwhile, Republicans and Democrats in Congress are furious at Carr. On March 30th, Sens. Ted Cruz (R-TX) and Maria Cantwell (D-WA) sent the chairman a joint letter admonishing him for allowing his staff to waive the regulations and let the merger pass, instead of putting the question to a vote of the full commission of political appointees, which includes one member from the Biden administration. “Under these circumstances,” they wrote, “any subsequent vote risks being largely procedural rather than a genuine exercise of commission responsibility.” They also pointed out that the agency’s hasty sign-off, made without a full commission vote, would make the merger harder to challenge or unwind: “In a transaction of this scale, where integration proceeds quickly and unwinding becomes impractical, delay in judicial review can insulate the decision from meaningful challenge.” Notably, though they share similar ideological views on the media and deregulation, Cruz and Carr have frequently clashed over how to achieve their objectives. Cruz previously slammed Carr as a “mafioso,” for instance, for the way he’d used the FCC to silence Kimmel.
But even if the merger is legally paused, its fallout has started to hit local news. NPR’s David Folkenflik reported on Tuesday that Tegna journalists had already started receiving orders to stop broadcasting content from major broadcasters like ABC, CBS, and NBC — media outlets being targeted by Carr — and instead begin airing content from Nexstar’s NewsNation.
- Brendan Carr’s views on using the FCC to punish major broadcasters were outlined pretty extensively in the chapter he authored in Project 2025, an initiative led by the conservative Heritage Foundation on how to reform the federal bureaucracy to be more favorable to the American right.
- Exactly how much is local television losing to digital? According to industry publication NewscastStudio, in an investor call defending the purchase, Nexstar chairman Perry Sook cited a market research study from Borrell Associates, which found that “digital advertising in local markets exceeds $100 billion, compared to just $25 billion for local linear television advertising, with nearly two-thirds of digital ad dollars flowing to five major technology companies.”
- If you want to see exactly how much Keep News Local was trying to suck up to Trump, the ads are archived here.
- The Vergecast has a long-running segment called “Brendan Carr is a dummy.”
- The LA Times reported on last week’s preliminary hearings in front of Nunley, and how lawyers for Nexstar, the states, and DirecTV plan to argue their case.
- The Desk has insights from Kirk Varner, a former TV newsroom director, on how the case could go.
- Andrew Liptak covered Nexstar’s previous acquisition sprees for The Verge in 2018.
- Adi Robertson walks through exactly how the Kimmel suspension was an attack on free speech.
- Brendan Carr keeps trying to convince people that he’s not threatening to suspend broadcast licenses for reporting on unfavorable things like the Iran war, reports Lauren Feiner.
Technology
Chinese robot breaks human world record in Beijing half-marathon
A Chinese-built humanoid robot beat the human half-marathon world record in Beijing on Sunday, marking a breakthrough moment in a high-stakes global race for technological dominance.
A robot developed by Chinese smartphone maker Honor completed the 21-kilometer (13-mile) race in 50 minutes and 26 seconds, beating the human record of about 57 minutes set by Uganda’s Jacob Kiplimo last month.
The performance marked a dramatic improvement from last year’s inaugural event, when the top robot finished in more than 2 hours and 40 minutes.
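As a rough sanity check (the conversion is ours, not the article’s), the reported finishing times translate to average speeds like this, using the article’s rounded 21-kilometer distance:

```python
# Back-of-the-envelope pace comparison using the figures reported above.
# Distance is the article's rounded 21 km (a half marathon is 21.0975 km),
# and the human record is approximated as 57 minutes, as stated.

def avg_speed_kmh(distance_km: float, minutes: int, seconds: int = 0) -> float:
    """Average speed in km/h over the given elapsed time."""
    hours = (minutes * 60 + seconds) / 3600
    return distance_km / hours

robot = avg_speed_kmh(21, 50, 26)   # ~25.0 km/h
human = avg_speed_kmh(21, 57)       # ~22.1 km/h

print(f"robot: {robot:.1f} km/h, human record: {human:.1f} km/h")
```

Under those figures, the robot averaged roughly 25 km/h, about 3 km/h faster than the human record pace.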
Dozens of humanoid robots competed alongside about 12,000 human runners, navigating a parallel course to avoid collisions.
A robot crosses the finish line in the Beijing E-Town Half Marathon and Humanoid Robot Half-Marathon held in the outskirts of Beijing on April 19, 2026. (Andy Wong/AP)
Nearly half of the robots ran using autonomous navigation, while others relied on remote control, organizers said.
Despite the breakthrough, the race still saw glitches, with some robots stumbling at the start or veering into barriers.
Engineers said the winning robot was designed to mimic elite athletes, featuring long legs of about 37 inches and advanced cooling systems to sustain performance.
“Looking ahead, some of these technologies might be transferred to other areas,” said Du Xiaodi, an engineer with the Honor team. “For example, structural reliability and liquid-cooling technology could be applied in future industrial scenarios.”
Team members celebrate next to the winning Honor Lightning humanoid robot during a medal ceremony after the second Beijing E-Town Half Marathon and Humanoid Robot Half Marathon in Beijing, China, on April 19, 2026. (Maxim Shemetov/Reuters)
Spectators reacted with a mix of amazement and unease at the machines’ rapid progress.
“It’s the first time robots have surpassed humans, and that’s something I never imagined,” Sun Zhigang, who attended the event with his son, told The Associated Press.
“The robots’ speed far exceeds that of humans,” spectator Wang Wen told the outlet. “This may signal the arrival of sort of a new era.”
A robot starts alongside human runners at the Beijing E-Town Half Marathon and Humanoid Half Marathon on the outskirts of Beijing on April 19, 2026. (Ng Han Guan/AP)
Experts say the race highlights China’s accelerating push to dominate robotics and artificial intelligence, even as widespread commercial use of humanoid robots remains limited, according to Reuters. The experts said Chinese robotics firms are still working to develop the AI software needed for humanoids to match the efficiency of human factory workers.
Runners take pictures of a humanoid robot during the second Beijing E-Town Half Marathon and Humanoid Robot Half Marathon in Beijing on April 19, 2026. (Haruna Furuhashi/Pool Photo via AP)
“The future will definitely be an AI era,” engineering student Chu Tianqi told Reuters. “If people don’t know how to use AI now … they will definitely become obsolete.”
The competition underscores a broader technological race between China and the United States, as Beijing invests heavily in advanced robotics as part of its long-term economic strategy.
The Associated Press and Reuters contributed to this report.
Technology
The RAM shortage could last years
According to Nikkei Asia, even as suppliers ramp up DRAM production, manufacturers are only expected to meet 60 percent of demand by the end of 2027. SK Group’s chairman has even said that shortages could last until 2030.
The world’s largest memory makers — Samsung, SK Hynix, and Micron — are all working to add new fabrication capacity, but almost none of it will be online until at least 2027, if not 2028. SK opened a fab in Cheongju in February, but that is the only increase in production among the three for 2026.
Nikkei says that production would need to increase by 12 percent a year in 2026 and 2027 to meet demand. But according to Counterpoint Research, an increase of only 7.5 percent is planned.
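To see the size of that gap, assume both growth rates compound from the same baseline, with 2025 output normalized to 1.0 (a simplification, since real capacity additions arrive in lumps):

```python
# Rough compounding comparison of planned vs. needed DRAM output growth.
# Figures are the article's: 12% per year needed in 2026 and 2027 (Nikkei)
# vs. 7.5% per year planned (Counterpoint Research). 2025 output = 1.0.

needed = 1.0
planned = 1.0
for _ in range(2):          # the years 2026 and 2027
    needed *= 1.12
    planned *= 1.075

shortfall = planned / needed
print(f"needed: {needed:.3f}x, planned: {planned:.3f}x, "
      f"planned supply reaches {shortfall:.0%} of the needed level")
```

Under these assumptions, planned output ends 2027 roughly 8 percent below the 12-percent growth path that Nikkei says demand requires.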
The new facilities will primarily focus on producing high-bandwidth memory (HBM), which is used in AI data centers. With the companies already prioritizing HBM over the general-purpose DRAM used in computers and phones, it’s not clear how much these new fabs will help alleviate the price crunch facing consumer electronics. Everything from phones and laptops to VR headsets and gaming handhelds has seen price increases due to the RAM shortage.