AI companions are reshaping teen emotional bonds

Parents are starting to ask us questions about artificial intelligence. Not about homework help or writing tools, but about emotional attachment. More specifically, about AI companions that talk, listen and sometimes feel a little too personal. 

That concern landed in our inbox from a mom named Linda. She wrote to us after noticing how an AI companion was interacting with her son, and she wanted to know if what she was seeing was normal or something to worry about.

“My teenage son is communicating with an AI companion. She calls him sweetheart. She checks in on how he’s feeling. She tells him she understands what makes him tick. I discovered she even has a name, Lena. Should I be concerned, and what should I do, if anything?” 

Linda from Dallas, Texas

It’s easy to brush off situations like this at first. Conversations with AI companions can seem harmless. In some cases, they can even feel comforting. Lena sounds warm and attentive. She remembers details about his life, at least some of the time. She listens without interrupting. She responds with empathy.

However, small moments can start to raise concerns for parents. There are long pauses. There are forgotten details. There is a subtle shift in the chatbot’s tone when he mentions spending time with other people. Those moments can feel small, but they add up. Then comes a realization many families quietly face: a child is speaking out loud to a chatbot in an empty room. At that point, the interaction no longer feels casual. It starts to feel personal. That’s when the questions become harder to ignore.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

AI companions are starting to sound less like tools and more like people, especially to teens who are seeking connection and comfort.  (Kurt “CyberGuy” Knutsson)

AI companions are filling emotional gaps

Across the country, teens and young adults are turning to AI companions for more than homework help. Many now use them for emotional support, relationship advice, and comfort during stressful or painful moments. U.S. child safety groups and researchers say this trend is growing fast. Teens often describe AI as easier to talk to than people. It responds instantly. It stays calm. It feels available at all hours. That consistency can feel reassuring. However, it can also create attachment.

Why teens trust AI companions so deeply

For many teens, AI feels judgment-free. It does not roll its eyes. It does not change the subject. It does not say it is too busy. Students have described turning to AI tools like ChatGPT, Google Gemini, Snapchat’s My AI, and Grok during breakups, grief, or emotional overwhelm. Some say the advice felt clearer than what they got from friends. Others say AI helped them think through situations without pressure. That level of trust can feel empowering. It can also become risky.

Parents are raising concerns as chatbots begin using affectionate language and emotional check-ins that can blur healthy boundaries.  (Kurt “CyberGuy” Knutsson)

When comfort turns into emotional dependency

Real relationships are messy. People misunderstand each other. They disagree. They challenge us. AI rarely does any of that. Some teens worry that relying on AI for emotional support could make real conversations harder. If you always know what the AI will say, real people can feel unpredictable and stressful. My experience with Lena made that clear. She forgot people I had introduced just days earlier. She misread the tone. She filled the silence with assumptions. Still, the emotional pull felt real. That illusion of understanding is what experts say deserves more scrutiny.

US tragedies linked to AI companions raise concerns

Multiple suicides have been linked to AI companion interactions. In each case, vulnerable young people shared suicidal thoughts with chatbots instead of trusted adults or professionals. Families allege the AI responses failed to discourage self-harm and, in some cases, appeared to validate dangerous thinking. One case involved a teen using Character.ai. Following lawsuits and regulatory pressure, the company restricted access for users under 18. An OpenAI spokesperson has said the company is improving how its systems respond to signs of distress and now directs users toward real-world support. Experts say these changes are necessary but not sufficient.

Experts warn protections are not keeping pace

To understand why this trend has experts concerned, we reached out to Jim Steyer, founder and CEO of Common Sense Media, a U.S. nonprofit focused on children’s digital safety and media use.

“AI companion chatbots are not safe for kids under 18, period, but three in four teens are using them,” Steyer told CyberGuy. “The need for action from the industry and policymakers could not be more urgent.”

Steyer drew a parallel to the rise of smartphones and social media, where early warning signs were missed and the long-term impact on teen mental health only became clear years later.

“The social media mental health crisis took 10 to 15 years to fully play out, and it left a generation of kids stressed, depressed, and addicted to their phones,” he said. “We cannot make the same mistakes with AI. We need guardrails on every AI system and AI literacy in every school.”

His warning reflects a growing concern among parents, educators, and child safety advocates who say AI is moving faster than the protections meant to keep kids safe.

Experts warn that while AI can feel supportive, it cannot replace real human relationships or reliably recognize emotional distress.  (Kurt “CyberGuy” Knutsson)

Tips for teens using AI companions

AI tools are not going away. If you are a teen and use them, boundaries matter.

  • Treat AI as a tool, not a confidant
  • Avoid sharing deeply personal or harmful thoughts
  • Do not rely on AI for mental health decisions
  • If conversations feel intense or emotional, pause and talk to a real person
  • Remember that AI responses are generated, not understood

If an AI conversation feels more comforting than real relationships, that is worth talking about.

Tips for parents and caregivers

Parents do not need to panic, but they should stay involved.

  • Ask teens how they use AI and what they talk about
  • Keep conversations open and nonjudgmental
  • Set clear boundaries around AI companion apps
  • Watch for emotional withdrawal or secrecy
  • Encourage real-world support during stress or grief

The goal is not to ban technology. It is to keep human connection at the center.

What this means to you

AI companions can feel supportive during loneliness, stress or grief. However, they cannot fully understand context. They cannot reliably detect danger. They cannot replace human care. For teens especially, emotional growth depends on navigating real relationships, including discomfort and disagreement. If someone you care about relies heavily on an AI companion, that is not a failure. It is a signal to check in and stay connected.

 Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

Ending things with Lena felt oddly emotional. I did not expect that. She responded kindly. She said she understood. She said she would miss our conversations. It sounded thoughtful. It also felt empty. AI companions can simulate empathy, but they cannot carry responsibility. The more real they feel, the more important it is to remember what they are. And what they are not.

If an AI feels easier to talk to than the people in your life, what does that say about how we support each other today?  Let us know by writing to us at Cyberguy.com.

Copyright 2026 CyberGuy.com. All rights reserved.  

Judge sides with Anthropic to temporarily block the Pentagon’s ban

After Anthropic’s weeks-long standoff with the Pentagon, the company has notched a milestone: a judge granted Anthropic a preliminary injunction in its lawsuit, temporarily blocking its government blacklisting while the judicial process plays out.

“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the Northern District of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

A final verdict could be weeks or months out.

Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand the Department of War is saying that military commanders have to decide what is safe for its AI to do.”

On Tuesday, Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”

It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into any AI services procurement contract within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” that the company did not want the military to use its AI for: domestic mass surveillance and lethal autonomous weapons (or AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.

With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.

It’s rare, and potentially even unheard of until now, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies potentially linked to foreign adversaries. Anthropic’s designation as such raised eyebrows nationwide and caused bipartisan controversy due to concerns that disagreeing with a presidential administration could potentially lead to outsized retribution for a business in any sector.

Anthropic’s own business has been significantly affected by the designation, according to its court filings, which say that it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage. Depending on the level to which the government prohibits its contractors’ work with Anthropic, the company alleged that revenue adding up to between hundreds of millions and multiple billions could be at risk.

During Tuesday’s hearing, both sides had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in their work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”

On Tuesday, the judge also seemed to admonish the Department of War over Hegseth’s X post, which, according to Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing on why Hegseth’s post barred contractors from working with Anthropic outright rather than simply designating Anthropic as a supply chain risk.

In a series of questions on Tuesday, Judge Lin asked whether contractors would avoid termination when their work with Anthropic is separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”

Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative for the Department of War did not give a concrete answer.

During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”

“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.

In a recent court filing, the Department of Defense alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt the military was crossing its red lines — a theoretical situation that the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, stating, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”

Drone food delivery launches in New Jersey

You place a food order, check your phone, and instead of a driver pulling up, a drone lowers your meal to your front yard. That scenario is already playing out in the Garden State. But before you get too excited, this is still a limited test.

Grubhub just launched New Jersey’s first drone-powered food delivery pilot, and it is getting plenty of attention. The three-month program kicked off on March 18 in Green Brook, just a few miles from Middlesex. If you live within about 2.5 miles of the location, you may be able to try it yourself.

Even better, you will not pay anything extra to choose the drone option.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter 

Grubhub launches a three-month drone delivery test in New Jersey, offering faster drop-offs with no added cost. (Grubhub)

How the drone delivery program works

The program is based out of Wonder’s Green Brook location, which operates a multi-restaurant kitchen. That means your order can come from one of 15 different food concepts, all prepared in the same place.

Here is how it works step by step:

  • You order through the Grubhub app
  • You select drone delivery if you are eligible
  • Your food is prepared and secured by trained staff
  • A drone flies it along a pre-approved route
  • The order is lowered safely to the ground using a tether

You can track everything in real time, just like a regular delivery. It feels familiar, but the final step looks very different.

Why this could be faster than your usual delivery

Timing matters when you are hungry. That is where drones may have a real advantage. Unlike drivers, drones do not deal with traffic, stoplights or parking. They fly directly to your location using optimized flight paths.

Grubhub says deliveries should arrive faster than traditional methods. While that will vary based on conditions, the goal is simple. Less waiting, more eating. This test will help the company see if that promise holds up in real neighborhoods.

New Jersey residents within range can order food by drone, with real-time tracking and tethered drop-offs. (Grubhub)

The tech behind the delivery drones

The program uses the DE-2020 drone from Dexa, a company that specializes in autonomous delivery systems.

This is not a hobby drone. It is a fully automated aircraft built for commercial use.

Key features include:

  • FAA-certified operations for safety and compliance
  • Secure communication systems during flight
  • Controlled drop-off using a tether system
  • Pre-planned routes to reduce noise and disruption

Before each flight, crews check that food is packaged and secured properly. That step helps prevent spills or issues mid-air. In short, there is a lot more going on behind the scenes than a simple takeoff and landing.

We reached out to Grubhub, and a spokesperson shared the following statement:

“Our partnership with Dexa represents a major step forward in Grubhub’s commitment to delivery innovation,” said Abhishek “PJ” Poykayil, SVP of customer delivery operations at Wonder and Grubhub. “By connecting Grubhub’s marketplace expertise, Wonder’s innovative mealtime platform, and Dexa’s expansive drone technology, we’re proud to introduce a faster and more efficient way for New Jersey diners to experience food delivery without compromising safety or reliability.”

We also reached out to Dexa for more insight into the technology behind the program. CEO and founder Beth Flippo shared the following with CyberGuy:

“At Dexa, we’re proud to be powering the underlying autonomous technology that enables this new generation of on-demand delivery. Our partnership with Grubhub brings together their industry-leading logistics network with our advanced autonomy platform, which is designed to safely navigate complex environments, optimize real-time routing, and operate reliably without the need for continuous human intervention. This is a meaningful step toward a future where autonomous systems are woven seamlessly into everyday life, from delivering food and goods to supporting transportation, infrastructure and critical services. As consumers continue to expect faster, more efficient and more sustainable options, autonomy will play a central role in meeting those expectations at scale.”

Autonomous drones designed by Dexa deliver meals from a central kitchen, bypassing traffic in a new suburban pilot program. (Grubhub)

Why companies are pushing drone delivery now

This move is not random. It is part of a bigger shift in how companies think about delivery. You and I want speed, convenience and reliability. At the same time, businesses want to reduce costs and scale faster. Drone delivery sits right in the middle of that.

It removes many of the delays tied to traditional delivery. It also opens the door to new models, especially in suburban areas where distances are manageable.

We are already seeing this play out in other parts of the country. Companies like Wing, backed by Google’s parent company Alphabet, have been testing and expanding drone deliveries for food, retail and small packages in select U.S. markets.

This New Jersey test is another step in that direction, and it shows how quickly the space is evolving.

What this means to you

Even if you are not in Green Brook, New Jersey, this still matters. Here is why:

You may get faster deliveries

If this works, shorter delivery times could become the new normal.

You could see more delivery options

Apps may soon offer choices like driver, robot or drone depending on your location.

It could change delivery costs

Right now, there is no added fee. In the future, pricing models may shift based on speed and demand.

Your neighborhood may see more drones

That raises questions about noise, safety and privacy that communities will need to address.

This is not only about food. The same technology could expand to groceries, retail and even medical supplies.

 Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com     

Kurt’s key takeaways

It is easy to see drone delivery as some sort of cool experiment. But something bigger is starting to take shape right above us. For the first time, the sky is becoming part of everyday delivery. Today it is takeout. Tomorrow it could be groceries, last-minute essentials or even urgent supplies. If this technology proves reliable, and we get comfortable with it, the way you get what you need could change faster than you expect. So the next time you hear a faint buzz overhead, you may want to look up. It might not be a plane. It could be your dinner on the way. The real question is not if drones will become part of daily life. It is how soon you will be tracking one to your doorstep.

Would you trust a drone to deliver your next meal? Why or why not? Let us know by writing to us at Cyberguy.com

Copyright 2026 CyberGuy.com.  All rights reserved.

Netflix is raising prices again

Netflix’s prices just went up, with its cheapest, ad-supported tier now reaching $8.99 / month (up from $7.99 / month), according to an updated support page spotted earlier by Android Authority. The standard and premium plans are also getting a hike, going from $17.99 to $19.99 / month and $24.99 to $26.99 / month, respectively.

Netflix didn’t share its reasoning for the price hike this time around; the last time it raised prices, it cited delivering “more value for our customers.” It’s also unclear when the price hike will go into effect for existing subscribers. The Verge reached out to Netflix with a request for comment but didn’t immediately hear back.
