AI-powered scams target kids while parents stay silent
Kids are spending more time online than ever, and that early exposure is opening the door to a new kind of danger.
Artificial intelligence has supercharged online scams, creating personalized and convincing traps that even adults can fall for. The latest Bitwarden “Cybersecurity Awareness Month 2025” poll shows that while parents know these risks exist, most still haven’t had a serious talk with their children about them.
This growing communication gap is leaving the youngest internet users vulnerable at a time when online safety depends more than ever on education and oversight.
Young children face real risks online
Children as young as preschool age are now part of the connected world, yet few truly understand how to stay safe. The Bitwarden survey found that 42% of parents with children between 3 and 5 years old said their child had accidentally shared personal information online.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
AI-powered scams are finding new ways to reach kids who go online earlier than ever. (Kurt “CyberGuy” Knutsson)
Nearly 80% of kids between the ages of 3 and 12 already have their own tablet or another connected device. Many parents assume supervision software or family settings are enough, but that assumption breaks down when kids explore apps, games and chat spaces designed to hold their attention. Device access has become nearly universal by early elementary school, but meaningful supervision and honest safety conversations are lagging behind.
The AI threat and the parental disconnect
Artificial intelligence has changed the nature of online scams by making them sound familiar, personal and hard to recognize. Bitwarden’s data shows that 78% of parents worry their child could fall for an AI-enhanced threat, such as a voice-cloned message or a fake chat with a friend. Despite that fear, almost half of those same parents haven’t talked with their kids about what an AI-powered scam might look like. The disconnect is even stronger among Gen Z parents.
About 80% of them say they are afraid their child will fall victim to an AI-based scheme, yet 37% allow their kids full or nearly full autonomy online. In those households, problems are more common. Malware infections, unauthorized in-app purchases and phishing attempts appear at the highest rates among families who worry the most but monitor the least. The paradox is clear. Parents recognize the threat but fail to translate awareness into consistent action.
Why parents haven’t had the talk
There are many reasons this important talk keeps getting delayed. Some parents simply feel unprepared to explain AI, while others assume their existing safety tools will protect their children. Only 17% of parents in the United States actively seek information about AI technologies, according to related research by Barna Group. That leaves a large majority relying on partial knowledge or outdated advice.
Many parents also juggle multiple devices at home, making it difficult to track every app or game their child uses. Some overestimate how safe their own habits are, even though they admit to reusing passwords or skipping security updates. Without firsthand understanding or personal discipline, it becomes even harder to teach those lessons to children. As a result, many kids face the internet with curiosity but without proper guidance.
Smart ways to protect your child online
The Bitwarden findings make one thing clear: kids are getting connected younger, and scams powered by artificial intelligence are already targeting them. The good news is that parents can take practical steps right now to reduce those risks and build lasting online safety habits.
1) Keep devices where you can see them
Set up tablets, laptops and gaming consoles in shared family areas rather than bedrooms. When screens stay visible, you naturally become part of your child’s online world. This not only encourages open conversation but also helps spot suspicious messages, fake friend requests or scam links before they cause trouble.
Staying involved in your child’s digital life is the best defense against today’s AI threats. (Kurt “CyberGuy” Knutsson)
2) Use built-in parental controls
Most devices have strong tools you can activate in minutes. Apple’s Screen Time and Google Family Link let you limit screen time, approve new app installs and monitor how long your child spends on specific apps. These controls are especially useful for younger kids who, according to the Bitwarden poll, often have little supervision despite heavy device use.
3) Talk through every download
Before your child installs a new game or app, take a moment to check it together. Read the reviews, look at what data it collects and confirm the developer’s name. Explain why some games or “free” apps might ask for camera or contact access they don’t need. This kind of shared review teaches healthy skepticism and helps children recognize red flags later on.
4) Make password strength and 2FA a family rule
AI scams thrive on weak or reused passwords. Use a password manager to create and store strong, unique logins for each account. Turn on two-factor authentication (2FA) wherever possible so that even if a password is stolen, the account stays protected. Let your kids see how you use these tools so they learn that security isn't complicated; it's just a habit.
Many parents delay important online safety talks because they feel unprepared to explain AI, leaving kids curious but without the guidance they need to stay safe. (Kurt “CyberGuy” Knutsson)
Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Teach them to stop and tell
One of the best defenses is simple: encourage your child to pause and talk before reacting to anything unusual online. Whether it’s a pop-up claiming a prize, a strange link in a chat or a voice message that sounds familiar, remind them it’s always okay to ask you first. Quick conversations like these can prevent costly mistakes and turn learning moments into trust-building ones.
6) Keep devices updated and use strong antivirus software
Outdated software can leave gaps that scammers exploit. Regularly update operating systems, browsers and apps to close those holes. Add strong antivirus software. Explain to your child that updates and scans keep their favorite games and videos running safely, not just keep their parents happy.
The best way to protect yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
7) Make online safety part of everyday life
Don’t save these conversations for when something goes wrong. Bring them up casually during family time or when watching YouTube or gaming together. Treat digital safety like any other life skill, something practiced daily and improved with time. The more normal it feels, the more confident your child becomes when facing online risks.
Talking about online safety early helps build trust and awareness before trouble starts. (Kurt “CyberGuy” Knutsson)
What this means for you
If you are a parent, guardian or anyone helping a child use technology, this issue deserves your attention. Start talking early, even before your child begins exploring the web on their own. Teach them simple concepts like asking before clicking or sharing. Instead of relying only on parental controls, have ongoing conversations that help them recognize suspicious links, messages or pop-ups. Show them that cybersecurity isn't about fear but about awareness.

Model strong digital habits at home by using unique passwords and turning on two-factor authentication. Explain why those steps matter. When your child understands the reasoning behind the rules, they are more likely to follow them.

Make technology part of your family routine rather than a private space your child navigates alone. Regularly check the apps they use and the people they interact with. Set clear expectations and age-appropriate boundaries that can grow with your child's experience. Staying engaged is the most powerful protection you can offer.
Kurt’s key takeaways
The numbers from Bitwarden show a clear warning sign. Concern among parents is high, yet actual conversations about AI-powered scams remain rare. That silence gives scammers the upper hand. Children who learn about online safety early are more confident, more cautious and better equipped to handle unexpected messages or fake alerts. It only takes a few minutes of honest conversation to create awareness that lasts for years. By taking action now, you can close the gap between fear and understanding, protecting your family in a digital world that changes every day.
Are you ready to start the conversation that could keep your child from becoming the next target of an AI-powered scam? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Defense Secretary Pete Hegseth designates Anthropic a supply chain risk
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
What Trump’s ‘ratepayer protection pledge’ means for you
When you open a chatbot, stream a show or back up photos to the cloud, you are tapping into a vast network of data centers. These facilities power artificial intelligence, search engines and online services we use every day. Now there is a growing debate over who should pay for the electricity those data centers consume.
During his State of the Union address this week, President Trump introduced a new initiative called the "ratepayer protection pledge" to shift AI-driven electricity costs away from consumers. The core idea is simple.
Tech companies that run energy-intensive AI data centers should cover the cost of the extra electricity they require rather than passing those costs on to everyday customers through higher utility rates.
It sounds simple. The hard part is what happens next.
At the State of the Union address Feb. 24, 2026, President Trump unveiled the “ratepayer protection pledge” aimed at shielding consumers from rising electricity costs tied to AI data centers. (Nathan Posner/Anadolu via Getty Images)
Why AI is driving a surge in electricity demand
AI systems require enormous computing power, and that computing power requires enormous amounts of electricity. Today's data centers can consume as much power as a small city. As AI tools expand across business, healthcare, finance and consumer apps, energy demand has risen sharply in certain regions.
Utilities have warned that the current grid in many parts of the country was not built for this level of concentrated demand. Upgrading substations, transmission lines and generation capacity costs money. Traditionally, those costs can influence rates paid by homes and small businesses. That is where the pledge comes in.
What the ratepayer protection pledge is designed to do
Under the ratepayer protection pledge, large technology companies would:
- Cover the full cost of additional electricity tied to their data centers
- Build their own on-site power generation to reduce strain on the public grid
Supporters say this approach separates residential energy costs from large-scale AI expansion. In other words, your household bill should not rise simply because a new AI data center opens nearby. So far, Anthropic is the clearest public backer. CyberGuy reached out to Anthropic for a comment on its role in the pledge. A company spokesperson referred us to a tweet from Anthropic Head of External Affairs Sarah Heck.
“American families shouldn’t pick up the tab for AI,” Heck wrote in a post on X. “In support of the White House ratepayer protection pledge, Anthropic has committed to covering 100% of electricity price increases that consumers face from our data centers.”
That makes Anthropic one of the first major AI companies to publicly state it will absorb consumer electricity price increases tied to its data center operations. Other major firms may be close behind. The White House reportedly plans to host Microsoft, Meta and Anthropic in early March to discuss formalizing a broader deal, though attendance and final terms have not been confirmed publicly.
Microsoft also expressed support for the initiative.
“The ratepayer protection pledge is an important step,” Brad Smith, Microsoft vice chair and president, said in a statement to CyberGuy. “We appreciate the administration’s work to ensure that data centers don’t contribute to higher electricity prices for consumers.”
Industry groups also point to companies such as Google and utilities including Duke Energy and Georgia Power as making consumer-focused commitments tied to data center growth. However, enforcement mechanisms and long-term regulatory details remain unclear.
The White House plans talks with Microsoft, Meta and Anthropic about shifting AI energy costs away from consumers. (Eli Hiller/For The Washington Post via Getty Images)
How this could change the economics of AI
AI infrastructure is already one of the most expensive technology buildouts in history. Companies are investing billions in chips, servers and real estate. If firms must also finance dedicated power plants or pay premium rates for grid upgrades, the cost of running AI systems increases further. That could lead to:
- Slower expansion in some markets
- Greater investment in renewable energy and storage
- More partnerships between tech firms and utilities
Energy strategy may become just as important as computing strategy. For consumers, this shift signals that electricity is now a central part of the AI conversation. AI is no longer only about software. It is also about infrastructure.
The bigger consumer tech picture
AI is becoming embedded in smartphones, search engines, office software and home devices. As adoption grows, so does the hidden infrastructure supporting it. Energy is now part of the conversation around everyday technology. Every AI-generated image, voice command or cloud backup depends on a power-hungry network of servers.
By asking companies to account more directly for their electricity use, policymakers are acknowledging a new reality. The digital world runs on very physical resources. For you, that shift could mean more transparency. It also raises new questions about sustainability, local impact and long-term costs.
As AI expansion strains the grid, a new proposal would require tech firms to fund their own power needs. (Sameer Al-Doumy/AFP via Getty Images)
What this means for you
If you are a homeowner or renter, the practical question is simple. Will this protect my electric bill? In theory, separating data center energy costs from residential rates could reduce the risk of price spikes tied to AI growth. If companies fund their own generation or grid upgrades, utilities may have less reason to spread those costs among all customers.
That said, utility pricing is complex. It depends on state regulators, long-term planning and local energy markets.
Here is what you can watch for in your area:
- New data center construction announcements
- Utility filings that mention large commercial load growth
- Public service commission decisions on rate adjustments
Even if you rarely use AI tools, your community could feel the effects of a nearby data center. The pledge is intended to keep those large-scale power demands from showing up in your monthly bill.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
The ratepayer protection pledge highlights an important turning point. AI is no longer only about innovation and speed. It is also about energy and accountability. If tech companies truly absorb the cost of their expanding power needs, households may avoid some of the financial strain tied to rapid AI growth. If not, utility bills could become an unexpected front line in the AI era.
As AI tools become part of daily life, how much extra power are you willing to support to keep them running? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Here’s your first look at Kratos in Amazon’s God of War show
Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image but a notable one showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.
There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:
The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.
That sounds a lot like the recent soft reboot of the franchise, which started with 2018's God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes: Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).
While production is underway on the God of War series, there’s no word on when it might start streaming.