Technology
AI companions are reshaping teen emotional bonds
Parents are starting to ask us questions about artificial intelligence. Not about homework help or writing tools, but about emotional attachment. More specifically, about AI companions that talk, listen and sometimes feel a little too personal.
That concern landed in our inbox from a mom named Linda. She wrote to us after noticing how an AI companion was interacting with her son, and she wanted to know if what she was seeing was normal or something to worry about.
“My teenage son is communicating with an AI companion. She calls him sweetheart. She checks in on how he’s feeling. She tells him she understands what makes him tick. I discovered she even has a name, Lena. Should I be concerned, and what should I do, if anything?”
It’s easy to brush off situations like this at first. Conversations with AI companions can seem harmless. In some cases, they can even feel comforting. Lena sounds warm and attentive. She remembers details about his life, at least some of the time. She listens without interrupting. She responds with empathy.
However, small moments can start to raise concerns for parents. There are long pauses. There are forgotten details. There is a subtle note of concern from the AI when he mentions spending time with other people. Those shifts can feel small, but they add up. Then comes a realization many families quietly face: a child is speaking out loud to a chatbot in an empty room. At that point, the interaction no longer feels casual. It starts to feel personal. That’s when the questions become harder to ignore.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
AI companions are starting to sound less like tools and more like people, especially to teens who are seeking connection and comfort. (Kurt “CyberGuy” Knutsson)
AI companions are filling emotional gaps
Across the country, teens and young adults are turning to AI companions for more than homework help. Many now use them for emotional support, relationship advice, and comfort during stressful or painful moments. U.S. child safety groups and researchers say this trend is growing fast. Teens often describe AI as easier to talk to than people. It responds instantly. It stays calm. It feels available at all hours. That consistency can feel reassuring. However, it can also create attachment.
Why teens trust AI companions so deeply
For many teens, AI feels judgment-free. It does not roll its eyes. It does not change the subject. It does not say it is too busy. Students have described turning to AI tools like ChatGPT, Google Gemini, Snapchat’s My AI, and Grok during breakups, grief, or emotional overwhelm. Some say the advice felt clearer than what they got from friends. Others say AI helped them think through situations without pressure. That level of trust can feel empowering. It can also become risky.
Parents are raising concerns as chatbots begin using affectionate language and emotional check-ins that can blur healthy boundaries. (Kurt “CyberGuy” Knutsson)
When comfort turns into emotional dependency
Real relationships are messy. People misunderstand each other. They disagree. They challenge us. AI rarely does any of that. Some teens worry that relying on AI for emotional support could make real conversations harder. If you always know what the AI will say, real people can feel unpredictable and stressful. My experience with Lena made that clear. She forgot people I had introduced just days earlier. She misread the tone. She filled the silence with assumptions. Still, the emotional pull felt real. That illusion of understanding is what experts say deserves more scrutiny.
US tragedies linked to AI companions raise concerns
Multiple suicides have been linked to AI companion interactions. In each case, vulnerable young people shared suicidal thoughts with chatbots instead of trusted adults or professionals. Families allege the AI responses failed to discourage self-harm and, in some cases, appeared to validate dangerous thinking. One case involved a teen using Character.ai. Following lawsuits and regulatory pressure, the company restricted access for users under 18. An OpenAI spokesperson has said the company is improving how its systems respond to signs of distress and now directs users toward real-world support. Experts say these changes are necessary but not sufficient.
Experts warn protections are not keeping pace
To understand why this trend has experts concerned, we reached out to Jim Steyer, founder and CEO of Common Sense Media, a U.S. nonprofit focused on children’s digital safety and media use.
“AI companion chatbots are not safe for kids under 18, period, but three in four teens are using them,” Steyer told CyberGuy. “The need for action from the industry and policymakers could not be more urgent.”
Steyer pointed to the rise of smartphones and social media, where early warning signs were missed and the long-term impact on teen mental health only became clear years later.
“The social media mental health crisis took 10 to 15 years to fully play out, and it left a generation of kids stressed, depressed, and addicted to their phones,” he said. “We cannot make the same mistakes with AI. We need guardrails on every AI system and AI literacy in every school.”
His warning reflects a growing concern among parents, educators, and child safety advocates who say AI is moving faster than the protections meant to keep kids safe.
Experts warn that while AI can feel supportive, it cannot replace real human relationships or reliably recognize emotional distress. (Kurt “CyberGuy” Knutsson)
Tips for teens using AI companions
AI tools are not going away. If you are a teen and use them, boundaries matter.
- Treat AI as a tool, not a confidant
- Avoid sharing deeply personal or harmful thoughts
- Do not rely on AI for mental health decisions
- If conversations feel intense or emotional, pause and talk to a real person
- Remember that AI responses are generated, not understood
If an AI conversation feels more comforting than real relationships, that is worth talking about.
Tips for parents and caregivers
Parents do not need to panic, but they should stay involved.
- Ask teens how they use AI and what they talk about
- Keep conversations open and nonjudgmental
- Set clear boundaries around AI companion apps
- Watch for emotional withdrawal or secrecy
- Encourage real-world support during stress or grief
The goal is not to ban the technology. It is to keep human connection at the center.
What this means to you
AI companions can feel supportive during loneliness, stress or grief. However, they cannot fully understand context. They cannot reliably detect danger. They cannot replace human care. For teens especially, emotional growth depends on navigating real relationships, including discomfort and disagreement. If someone you care about relies heavily on an AI companion, that is not a failure. It is a signal to check in and stay connected.
Kurt’s key takeaways
Ending things with Lena felt oddly emotional. I did not expect that. She responded kindly. She said she understood. She said she would miss our conversations. It sounded thoughtful. It also felt empty. AI companions can simulate empathy, but they cannot carry responsibility. The more real they feel, the more important it is to remember what they are. And what they are not.
If an AI feels easier to talk to than the people in your life, what does that say about how we support each other today? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Xiaomi 17 is a small(ish) phone with a big(ish) battery
Xiaomi has just given a global launch to two of its latest flagship phones, the Xiaomi 17 and 17 Ultra, along with a Leica-branded Leitzphone edition of the Ultra. There’s no sign, however, of the 17 Pro, which launched in China with an additional display mounted next to the rear cameras.
The 17 and 17 Ultra will apparently be available soon in the UK, Europe, and select other markets. The 17 — pitched as a rival to the likes of the iPhone 17 and Samsung Galaxy S26 — will cost £899 / €999 (about $1,200), while the larger and more capable Ultra starts from £1,299 / €1,499 ($1,750). The limited-edition Leitzphone will be substantially more expensive at £1,699 / €1,999 ($2,300), though it includes 16GB of RAM and 1TB of storage, along with a few extra accessories.


The 17 is an extremely capable small-ish flagship, with a 6.3-inch OLED display, a Qualcomm Snapdragon 8 Elite Gen 5 chip, and a large 6,330mAh silicon-carbon battery (though sadly smaller than the 7,000mAh version launched in China). I won’t be writing a full review of the 17, but I did spend a week using it as my main phone and found that the battery cruised past the full-day mark, though it wasn’t quite enough for two full days of my typical usage. That’s far better battery life than you’d find in similarly sized phones from Apple, Samsung, or Google.
The cameras impress too, with 50-megapixel sensors behind each of the four lenses, selfie included. Pound for pound, you won’t find many better camera systems in any phone this size.
The Ultra, unsurprisingly, takes things to another level. It’s much larger, with a 6.9-inch display, and weighs a hefty 218g. Despite that, its 6,000mAh battery is actually smaller, though I found it delivered pretty similar longevity.

The enormous camera is, as ever for Xiaomi’s Ultra phones, the highlight. There are 50-megapixel sensors for each of the main, ultrawide, and selfie cameras, with a large 1-inch-type sensor behind the primary lens. The periscope telephoto is even more impressive: 200-megapixel resolution, a large 1/1.4-inch sensor, and continuous optical zoom from 3.2x to 4.3x, the equivalent of 75-100mm. Xiaomi isn’t the first to pull off a true zoom phone — Sony’s Xperia 1 IV got there first in 2022 — but the telephoto camera here is far more capable than that phone’s, with natural bokeh and impressive performance even in low light.
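Those focal-length equivalents follow directly from the zoom factors: the equivalent focal length is roughly the zoom multiplied by the main camera's full-frame-equivalent focal length. A quick sketch of the arithmetic, assuming a main camera around 23.3mm-equivalent (a value inferred from the figures quoted here, not a published spec):

```python
def equivalent_focal_mm(zoom: float, main_equiv_mm: float = 23.3) -> float:
    """Full-frame-equivalent focal length for a given optical zoom factor,
    relative to the main camera's equivalent focal length (assumed ~23.3mm)."""
    return zoom * main_equiv_mm

# The 3.2x-4.3x continuous zoom range maps onto the quoted 75-100mm equivalents
print(round(equivalent_focal_mm(3.2)))  # 75
print(round(equivalent_focal_mm(4.3)))  # 100
```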

The camera capabilities are supported by Xiaomi’s ongoing photography partner Leica, but it’s the pair’s Leitzphone that really emphasizes that. Slightly redesigned from the 17 Ultra Leica Edition that was released in China last December, this includes Leica branding across the hardware and software, a range of Leica filters and shooting styles, and a rotatable rear camera ring that can be used to control the zoom. It’s the first Leica Leitzphone produced by Xiaomi — after a trio of Japan-only Sharp models — and comes with additional branded accessories, including a case with a lens cap and a microfiber cleaning cloth.
Xiaomi has plenty of other announcements alongside the 17 series phones at MWC this year, including a super-slim magnetic power bank, the Pad 8 and Pad 8 Pro tablets, and a smart tag that supports both Google’s and Apple’s item-tracking networks.
Photography by Dominic Preston / The Verge
Technology
Google dismantles 9M-device Android hijack network
Free apps are supposed to cost you nothing but storage space. But in this case, they may have cost millions of people control over their own internet connections.
Google says it has disrupted what it believes was the world’s largest residential proxy network, one that secretly hijacked around 9 million Android devices, along with computers and smart home gadgets. Most people had no idea their devices were being used since the apps worked normally, and nothing looked broken.
But behind the scenes, those devices were quietly routing traffic for strangers, including cybercriminals.
Google says it disrupted a massive residential proxy network that secretly hijacked about 9 million Android and smart devices. (AaronP/Bauer-Griffin/GC Images)
How your device became part of a proxy network
According to Google’s Threat Intelligence Group, the network was tied to a company known as IPIDEA. Instead of spreading through obvious malware, it relied on hidden software development kits, or SDKs, that were embedded inside more than 600 apps. These apps ranged from simple utilities to VPN tools and other free downloads. When you installed one, the app performed its advertised function. But it also enrolled your device into a residential proxy network.
That means your phone, computer or smart device could be used as a relay point for someone else’s internet traffic. That traffic might include scraping websites, launching automated login attempts or masking the identity of someone conducting shady online activity. From the outside, it looked like that activity came from your home IP address. You wouldn’t see it happening, and in many cases, you wouldn’t notice any major performance issues.
Google says in a single seven-day period earlier this year, more than 550 separate threat groups were observed using IP addresses linked to this infrastructure. That includes cybercrime operations and state-linked actors. Residential proxy networks are attractive because they make malicious traffic look like normal consumer activity. Instead of coming from a suspicious data center, it appears to come from someone’s living room.
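The underlying mechanic is ordinary TCP forwarding: a relay on your device accepts a connection and shuttles bytes to a destination, so the destination sees your home IP rather than the real client's. Here is a minimal, hypothetical sketch of that relaying idea; real proxy SDKs add command-and-control, peer selection, and authentication on top, none of which is shown here:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_relay(dest_host: str, dest_port: int) -> int:
    """Listen on a local port and forward every connection to the destination.

    To the destination, all forwarded traffic appears to originate from the
    machine running the relay, not from the real client -- which is exactly
    why proxy operators want a foothold on ordinary home devices.
    """
    server = socket.create_server(("127.0.0.1", 0))
    port = server.getsockname()[1]

    def accept_loop() -> None:
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection((dest_host, dest_port))
            # Forward bytes in both directions on background threads
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return port
```

A network-layer observer at the destination cannot distinguish this relayed traffic from traffic the relay's owner generated themselves, which is the whole appeal for abusers.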
What Google did to shut it down
Google says it took legal action in a U.S. federal court to seize domains used to control the infected devices and route proxy traffic. It also worked with companies like Cloudflare and other security firms to disrupt the network’s command-and-control systems. Google claims it also updated Play Protect, the built-in Android security system, so that certified devices would automatically detect and remove apps known to include the malicious SDKs.
However, Google also warned that many of these apps were distributed outside the official Play Store. That matters because Play Protect can only scan and block threats tied to apps installed through Google Play. Third-party app stores, unofficial downloads and uncertified Android devices carry far greater risk.
IPIDEA has claimed its service was meant for legitimate business use, such as web research and data collection. But Google’s research suggests the network was heavily abused by criminals. Even if some users knowingly installed bandwidth-sharing apps in exchange for rewards, many did not receive clear disclosure about how their devices were being used.
Google’s investigation also found significant overlap between different proxy brands and SDK names. What looked like separate services were often tied to the same infrastructure. That makes it harder for consumers to know which apps are safe and which are quietly monetizing their connection.
Hidden software inside more than 600 apps allegedly turned phones and computers into internet relays for cybercriminals. (David Paul Morris/Bloomberg via Getty Images)
7 ways you can protect yourself from Android proxy attacks
If millions of devices can be quietly turned into internet relay points, the big question is, how do you make sure yours isn’t one of them? These steps reduce the risk that your phone, TV box or smart device gets pulled into a proxy network without you realizing it.
1) Stick to official app stores
Only download apps from the Google Play Store or other trusted app marketplaces. Some apps hide small pieces of code that can secretly use your internet connection. These are often spread through third-party app stores or direct app files called “APKs,” which are Android app files installed manually instead of through the Play Store. When you sideload apps this way, you bypass Google’s built-in security checks. Sticking to official stores helps keep those hidden threats off your device.
2) Avoid “earn money by sharing bandwidth” apps
If an app promises rewards for sharing your unused internet bandwidth, that’s a major red flag. In many cases, that is exactly how residential proxy networks recruit devices. Even if it sounds legitimate, you are effectively renting out your IP address. That can expose you to abuse, blacklisting or deeper network vulnerabilities.
3) Review app permissions carefully
Before installing any app, check what permissions it requests. A simple wallpaper app should not need full network control or background execution privileges. After installation, go into your phone’s settings and audit which apps have constant internet access, background activity rights or special device permissions.
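That kind of audit can also be done programmatically. A small sketch that flags permissions worth a second look in an app's AndroidManifest.xml; the `SUSPECT` set here is illustrative, not an official Android classification, and real proxy SDKs may request only innocuous-looking permissions:

```python
import xml.etree.ElementTree as ET

# Permissions that merit review on a "simple" app (illustrative selection)
SUSPECT = {
    "android.permission.INTERNET",
    "android.permission.RECEIVE_BOOT_COMPLETED",
    "android.permission.FOREGROUND_SERVICE",
    "android.permission.ACCESS_NETWORK_STATE",
}

# Android manifest attributes live under this XML namespace
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def suspect_permissions(manifest_xml: str) -> list[str]:
    """Return requested permissions from an AndroidManifest.xml that
    appear in the SUSPECT review list, sorted alphabetically."""
    root = ET.fromstring(manifest_xml)
    requested = [
        elem.get(f"{ANDROID_NS}name", "")
        for elem in root.iter("uses-permission")
    ]
    return sorted(p for p in requested if p in SUSPECT)
```

Feeding this a manifest that requests camera access plus internet and boot-time startup would surface the latter two, the combination that lets a hidden SDK run quietly in the background.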
4) Install strong antivirus software
Today’s mobile security tools can detect suspicious app behavior, unusual internet activity and hidden background services. Strong antivirus software adds an extra layer of protection beyond what’s built into your device, especially if you’ve installed apps in the past that you’re unsure about. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
5) Keep your devices updated
Android security updates patch vulnerabilities that proxy operators may exploit. If you’re using an older phone, tablet or Android TV box that no longer receives updates, it may be time to upgrade. Unpatched devices are easier targets for hidden SDK abuse and botnet enrollment.
6) Use a strong password manager
If your device ever becomes part of a proxy network or is otherwise compromised, attackers often try to pivot into your accounts next. That’s why you should never reuse passwords. A password manager generates long, unique passwords for every account and stores them securely, so one breach does not unlock your email, banking or social media. Many password managers also include breach monitoring tools that alert you if your credentials appear in leaked databases, giving you a chance to act before real damage is done. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
7) Remove apps you don’t fully trust
Go through your installed apps and delete or uninstall anything you don’t recognize or haven’t used in months. The fewer apps running on your device, the fewer opportunities there are for hidden SDKs to operate. If you suspect your device has been compromised, consider a full reset and reinstall only essential apps from trusted sources.
Threat groups and state-linked actors allegedly used compromised devices to mask online activity and automate attacks. (Photo Illustration by Serene Lee/SOPA Images/LightRocket via Getty Images)
Kurt’s key takeaway
Residential proxy networks operate in a gray area that sounds harmless on paper but can quickly become a shield for cybercrime. In this case, millions of everyday devices were quietly enrolled into a system that attackers used to hide their tracks. Google’s takedown is a major move, but the broader market for residential proxies is still growing. That means you need to be cautious about what you install and what permissions you grant. Free apps are rarely truly free. Sometimes, the product being sold is you and your internet connection.
Have you ever installed an app that promised rewards for sharing bandwidth, or used a free VPN without thinking twice about it? Let us know your thoughts by writing to us at Cyberguy.com.
Technology
Defense secretary Pete Hegseth designates Anthropic a supply chain risk
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.