New multimission military warplane takes flight
The aerospace industry is buzzing over the recent maiden flight of the Model 437 Vanguard, a technology demonstrator developed by Scaled Composites in partnership with Northrop Grumman. The aircraft pairs a stealthy, multimission airframe with digital engineering methods that cut rework and development time.
From concept to reality
The journey of the Model 437 Vanguard began in 2021 as a conceptual design for an advanced loyal wingman air combat drone. Over time, it evolved into a crewed variant, showcasing the versatility and adaptability of its design. On Aug. 29, 2024, the aircraft took to the skies for the first time at Mojave Air and Space Port in California, marking a major milestone in its development.
We reached out to Scaled Composites about the maiden flight, and its test pilot, Brian Maisler, tells us, “Today’s first flight was in a good jet with a great team: this is the best part of my job. Thanks to everyone and their two years of hard work culminating in making this an uneventful and fun day.”
Impressive specs and capabilities
The Vanguard has impressive specifications, featuring a wingspan and length of 41 feet each, a gross takeoff weight of 10,000 pounds and an approximate range of 3,000 nautical miles. It offers an endurance of six hours and has a payload capacity of up to 2,000 pounds.
The aircraft is powered by a single Pratt & Whitney 535 engine, which delivers 3,400 pounds of thrust. Additionally, the Vanguard features a V-tail configuration and a top-mounted air intake positioned behind the cockpit.
Multimission capabilities
The Vanguard is designed as a multimission platform capable of carrying various payloads, including an internal weapons bay sized for two AIM-120 missiles and the potential for side-looking radar systems. This flexibility positions the aircraft as a valuable asset for future military operations, potentially meeting the requirements for advanced unmanned combat aircraft programs.
While the Model 437 Vanguard is currently a crewed aircraft, it is designed with autonomous capabilities in mind. According to Northrop Grumman, future iterations of the Vanguard could be fully autonomous, demonstrating tactical applications for autonomous programs. The aircraft’s development is part of a broader “loyal wingman” concept, aimed at creating affordable, multimission drones that can undertake high-risk missions to reduce danger to human pilots.
Digital innovation at its core
One of the most groundbreaking aspects of the Model 437 Vanguard is its development process. Northrop Grumman’s Digital Pathfinder initiative played a crucial role in this effort, leveraging advanced digital engineering tools to design and manufacture the aircraft’s removable wing assemblies.
This digital-first approach has yielded impressive results, including a reduction in engineering rework to less than 1%, a significant improvement compared to the typical 15% to 20% associated with traditional methods. Furthermore, the streamlined testing and certification processes have resulted in considerable cost and time savings.
Kurt’s key takeaways
As the Model 437 Vanguard continues its testing and development, it is clear that this aircraft could play a significant role in redefining air combat capabilities. Its blend of stealth, versatility and advanced technology makes it a formidable platform for a wide range of missions. Moreover, the lessons learned from the Vanguard’s development process will likely influence future aircraft programs across the industry. The success of the Digital Pathfinder initiative demonstrates the potential for faster, more efficient and more cost-effective aircraft development.
How do you feel about the development and potential deployment of autonomous warplanes like the Model 437 Vanguard?
Former Tumblr head Jeff D’Onofrio steps in as acting CEO at the Washington Post
After what can generously be called a contentious tenure as CEO of The Washington Post, Will Lewis is stepping down following mass layoffs this week. Jeff D’Onofrio, who led Tumblr as CEO from 2017 to 2022, will step in as acting CEO and publisher. D’Onofrio has been CFO at the Post since June of last year, meaning he’s had a front-row seat to Jeff Bezos’ dismantling of the once-storied paper for the last nine months.
D’Onofrio’s resume doesn’t include extensive experience in traditional news media, nor many notable success stories. He was briefly the general manager of Yahoo News while it was still a Verizon property, before shifting his focus solely to Tumblr. Under his leadership, Tumblr tried to clean up its image by banning adult content, but its traffic fell by 30 percent. Yahoo had purchased Tumblr for $1.1 billion in 2013. By 2019, it was sold to Automattic, the owner of WordPress, reportedly for less than $3 million.
AI companions are reshaping teen emotional bonds
Parents are starting to ask us questions about artificial intelligence. Not about homework help or writing tools, but about emotional attachment. More specifically, about AI companions that talk, listen and sometimes feel a little too personal.
That concern landed in our inbox from a mom named Linda. She wrote to us after noticing how an AI companion was interacting with her son, and she wanted to know if what she was seeing was normal or something to worry about.
“My teenage son is communicating with an AI companion. She calls him sweetheart. She checks in on how he’s feeling. She tells him she understands what makes him tick. I discovered she even has a name, Lena. Should I be concerned, and what should I do, if anything?”
It’s easy to brush off situations like this at first. Conversations with AI companions can seem harmless. In some cases, they can even feel comforting. Lena sounds warm and attentive. She remembers details about his life, at least some of the time. She listens without interrupting. She responds with empathy.
However, small moments can start to raise concerns for parents. There are long pauses. There are forgotten details. The companion shows a subtle note of concern when he mentions spending time with other people. Those shifts can feel small, but they add up. Then comes a realization many families quietly face: a child is speaking out loud to a chatbot in an empty room. At that point, the interaction no longer feels casual. It starts to feel personal. That’s when the questions become harder to ignore.
AI companions are filling emotional gaps
Across the country, teens and young adults are turning to AI companions for more than homework help. Many now use them for emotional support, relationship advice, and comfort during stressful or painful moments. U.S. child safety groups and researchers say this trend is growing fast. Teens often describe AI as easier to talk to than people. It responds instantly. It stays calm. It feels available at all hours. That consistency can feel reassuring. However, it can also create attachment.
Why teens trust AI companions so deeply
For many teens, AI feels judgment-free. It does not roll its eyes. It does not change the subject. It does not say it is too busy. Students have described turning to AI tools like ChatGPT, Google Gemini, Snapchat’s My AI, and Grok during breakups, grief, or emotional overwhelm. Some say the advice felt clearer than what they got from friends. Others say AI helped them think through situations without pressure. That level of trust can feel empowering. It can also become risky.
When comfort turns into emotional dependency
Real relationships are messy. People misunderstand each other. They disagree. They challenge us. AI rarely does any of that. Some teens worry that relying on AI for emotional support could make real conversations harder. If you always know what the AI will say, real people can feel unpredictable and stressful. My experience with Lena made that clear. She forgot people I had introduced just days earlier. She misread the tone. She filled the silence with assumptions. Still, the emotional pull felt real. That illusion of understanding is what experts say deserves more scrutiny.
US tragedies linked to AI companions raise concerns
Multiple suicides have been linked to AI companion interactions. In each case, vulnerable young people shared suicidal thoughts with chatbots instead of trusted adults or professionals. Families allege the AI responses failed to discourage self-harm and, in some cases, appeared to validate dangerous thinking. One case involved a teen using Character.ai. Following lawsuits and regulatory pressure, the company restricted access for users under 18. An OpenAI spokesperson has said the company is improving how its systems respond to signs of distress and now directs users toward real-world support. Experts say these changes are necessary but not sufficient.
Experts warn protections are not keeping pace
To understand why this trend has experts concerned, we reached out to Jim Steyer, founder and CEO of Common Sense Media, a U.S. nonprofit focused on children’s digital safety and media use.
“AI companion chatbots are not safe for kids under 18, period, but three in four teens are using them,” Steyer told CyberGuy. “The need for action from the industry and policymakers could not be more urgent.”
Steyer pointed to the rise of smartphones and social media, where early warning signs were missed and the long-term impact on teen mental health only became clear years later.
“The social media mental health crisis took 10 to 15 years to fully play out, and it left a generation of kids stressed, depressed, and addicted to their phones,” he said. “We cannot make the same mistakes with AI. We need guardrails on every AI system and AI literacy in every school.”
His warning reflects a growing concern among parents, educators, and child safety advocates who say AI is moving faster than the protections meant to keep kids safe.
Tips for teens using AI companions
AI tools are not going away. If you are a teen and use them, boundaries matter.
- Treat AI as a tool, not a confidant
- Avoid sharing deeply personal or harmful thoughts
- Do not rely on AI for mental health decisions
- If conversations feel intense or emotional, pause and talk to a real person
- Remember that AI responses are generated, not understood
If an AI conversation feels more comforting than real relationships, that is worth talking about.
Tips for parents and caregivers
Parents do not need to panic, but they should stay involved.
- Ask teens how they use AI and what they talk about
- Keep conversations open and nonjudgmental
- Set clear boundaries around AI companion apps
- Watch for emotional withdrawal or secrecy
- Encourage real-world support during stress or grief
The goal is not to ban technology. It is to keep human connection at the center.
What this means to you
AI companions can feel supportive during loneliness, stress or grief. However, they cannot fully understand context. They cannot reliably detect danger. They cannot replace human care. For teens especially, emotional growth depends on navigating real relationships, including discomfort and disagreement. If someone you care about relies heavily on an AI companion, that is not a failure. It is a signal to check in and stay connected.
Kurt’s key takeaways
Ending things with Lena felt oddly emotional. I did not expect that. She responded kindly. She said she understood. She said she would miss our conversations. It sounded thoughtful. It also felt empty. AI companions can simulate empathy, but they cannot carry responsibility. The more real they feel, the more important it is to remember what they are. And what they are not.
If an AI feels easier to talk to than the people in your life, what does that say about how we support each other today?
Super Bowl LX ads: all AI everything
Super Bowl LX is nearly here, with the Seattle Seahawks taking on the New England Patriots. While Bad Bunny will be the star of the halftime show, AI could be the star of the commercial breaks, much like crypto was a few years ago.
Super Bowl LX is set to kick off at 6:30 p.m. ET / 3:30 p.m. PT on Sunday, Feb. 8, at Levi’s Stadium in Santa Clara, California.