Technology
AI-powered scams target kids while parents stay silent
Kids are spending more time online than ever, and that early exposure is opening the door to a new kind of danger.
Artificial intelligence has supercharged online scams, creating personalized and convincing traps that even adults can fall for. The latest Bitwarden “Cybersecurity Awareness Month 2025” poll shows that while parents know these risks exist, most still haven’t had a serious talk with their children about them.
This growing communication gap is leaving the youngest internet users vulnerable at a time when online safety depends more than ever on education and oversight.
Young children face real risks online
Children as young as preschool age are now part of the connected world, yet few truly understand how to stay safe. The Bitwarden survey found that 42% of parents with children between 3 and 5 years old said their child had accidentally shared personal information online.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
AI-powered scams are finding new ways to reach kids who go online earlier than ever. (Kurt “CyberGuy” Knutsson)
Nearly 80% of kids between the ages of 3 and 12 already have their own tablet or another connected device. Many parents assume supervision software or family settings are enough, but that assumption breaks down when kids explore apps, games and chat spaces designed to hold their attention. Device access has become nearly universal by early elementary school, but meaningful supervision and honest safety conversations are lagging behind.
The AI threat and the parental disconnect
Artificial intelligence has changed the nature of online scams by making them sound familiar, personal and hard to recognize. Bitwarden’s data shows that 78% of parents worry their child could fall for an AI-enhanced threat, such as a voice-cloned message or a fake chat with a friend. Despite that fear, almost half of those same parents haven’t talked with their kids about what an AI-powered scam might look like. The disconnect is even stronger among Gen Z parents.
About 80% of them say they are afraid their child will fall victim to an AI-based scheme, yet 37% allow their kids full or nearly full autonomy online. In those households, problems are more common. Malware infections, unauthorized in-app purchases and phishing attempts appear at the highest rates among families who worry the most but monitor the least. The paradox is clear. Parents recognize the threat but fail to translate awareness into consistent action.
Why parents haven’t had the talk
There are many reasons this important talk keeps getting delayed. Some parents simply feel unprepared to explain AI, while others assume their existing safety tools will protect their children. Only 17% of parents in the United States actively seek information about AI technologies, according to related research by Barna Group. That leaves a large majority relying on partial knowledge or outdated advice.
Many parents also juggle multiple devices at home, making it difficult to track every app or game their child uses. Some overestimate how safe their own habits are, even though they admit to reusing passwords or skipping security updates. Without firsthand understanding or personal discipline, it becomes even harder to teach those lessons to children. As a result, many kids face the internet with curiosity but without proper guidance.
Smart ways to protect your child online
The Bitwarden findings make one thing clear: kids are getting connected younger, and scams powered by artificial intelligence are already targeting them. The good news is that parents can take practical steps right now to reduce those risks and build lasting online safety habits.
1) Keep devices where you can see them
Set up tablets, laptops and gaming consoles in shared family areas rather than bedrooms. When screens stay visible, you naturally become part of your child’s online world. This not only encourages open conversation but also helps spot suspicious messages, fake friend requests or scam links before they cause trouble.
Staying involved in your child’s digital life is the best defense against today’s AI threats. (Kurt “CyberGuy” Knutsson)
2) Use built-in parental controls
Most devices have strong tools you can activate in minutes. Apple’s Screen Time and Google Family Link let you limit screen time, approve new app installs and monitor how long your child spends on specific apps. These controls are especially useful for younger kids who, according to the Bitwarden poll, often have little supervision despite heavy device use.
3) Talk through every download
Before your child installs a new game or app, take a moment to check it together. Read the reviews, look at what data it collects and confirm the developer’s name. Explain why some games or “free” apps might ask for camera or contact access they don’t need. This kind of shared review teaches healthy skepticism and helps children recognize red flags later on.
4) Make password strength and 2FA a family rule
AI scams thrive on weak or reused passwords. Use a password manager to create and store strong, unique logins for each account. Turn on two-factor authentication (2FA) wherever possible so that even if a password is stolen, the account stays protected. Let your kids see how you use these tools so they learn that security isn’t complicated, it’s just a habit.
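A password manager automates this, but the underlying idea is just secure randomness. As a rough illustration only, not a substitute for a dedicated manager, here is a minimal sketch using Python's standard `secrets` module, which draws from the operating system's cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and symbols.

    Uses secrets.choice rather than random.choice, because the
    secrets module is designed for security-sensitive randomness.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces a different, hard-to-guess password.
print(generate_password())
```

Showing kids that a "strong" password is just a long random string a tool remembers for them helps demystify why unique logins per account are practical, not painful.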
Many parents delay important online safety talks because they feel unprepared to explain AI, leaving kids curious but without the guidance they need to stay safe. (Kurt “CyberGuy” Knutsson)
Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Teach them to stop and tell
One of the best defenses is simple: encourage your child to pause and talk before reacting to anything unusual online. Whether it’s a pop-up claiming a prize, a strange link in a chat or a voice message that sounds familiar, remind them it’s always okay to ask you first. Quick conversations like these can prevent costly mistakes and turn learning moments into trust-building ones.
6) Keep devices updated and use strong antivirus software
Outdated software can leave gaps that scammers exploit. Regularly update operating systems, browsers and apps to close those holes. Add strong antivirus software. Explain to your child that updates and scans keep their favorite games and videos running safely, not just their parents happy.
The best way to protect yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
7) Make online safety part of everyday life
Don’t save these conversations for when something goes wrong. Bring them up casually during family time or when watching YouTube or gaming together. Treat digital safety like any other life skill, something practiced daily and improved with time. The more normal it feels, the more confident your child becomes when facing online risks.
Talking about online safety early helps build trust and awareness before trouble starts. (Kurt “CyberGuy” Knutsson)
What this means for you
If you are a parent, guardian or anyone helping a child use technology, this issue deserves your attention. Start talking early, even before your child begins exploring the web on their own. Teach them simple concepts like asking before clicking or sharing. Instead of relying only on parental controls, have ongoing conversations that help them recognize suspicious links, messages or pop-ups. Show them that cybersecurity isn’t about fear but about awareness.

Model strong digital habits at home by using unique passwords and turning on two-factor authentication. Explain why those steps matter. When your child understands the reasoning behind the rules, they are more likely to follow them.

Make technology part of your family routine rather than a private space your child navigates alone. Regularly check the apps they use and the people they interact with. Set clear expectations and age-appropriate boundaries that can grow with your child’s experience. Staying engaged is the most powerful protection you can offer.
Kurt’s key takeaways
The numbers from Bitwarden show a clear warning sign. Concern among parents is high, yet actual conversations about AI-powered scams remain rare. That silence gives scammers the upper hand. Children who learn about online safety early are more confident, more cautious and better equipped to handle unexpected messages or fake alerts. It only takes a few minutes of honest conversation to create awareness that lasts for years. By taking action now, you can close the gap between fear and understanding, protecting your family in a digital world that changes every day.
Are you ready to start the conversation that could keep your child from becoming the next target of an AI-powered scam? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Google is expanding AirDrop support to more Android devices ‘very soon’
After introducing AirDrop support to Pixel 10 devices last year, Google is now set to expand it to phones made by other Android partners. Eric Kay, vice president of engineering for Android, confirmed in a press briefing attended by Android Authority that “a lot more” Android devices will be able to use Quick Share to initiate AirDrop sessions with Apple devices this year.
“We spent a lot of time and energy to make sure that we could build something that was compatible not only with iPhone but iPads and MacBooks,” Kay said. “Now that we’ve proven it out, we’re working with our partners to expand it into the rest of the ecosystem, and you should see some exciting announcements coming very soon.”
Currently, Google’s Pixel 10 phones are the only Android devices that can use Quick Share — Android’s own wireless peer-to-peer transfer feature, previously known as Nearby Share — to communicate directly with Apple’s AirDrop. Google hasn’t outlined any specific Android partners or devices for the update yet, but both Nothing and chipmaker Qualcomm teased in November that support was coming.
Kay also discussed Google’s efforts to improve the process for iOS users who switch to Android, helping to prevent incomplete data transfers, lost messages, and other issues. Apple has been working on a “user-friendly” way of transferring data from iPhones to other devices since early 2024, and Google and Apple’s collaborative efforts were seen being tested in Android Canary 2512 for Pixel devices in December.
“We’re also going to be working to make it easy for people who do decide to switch to transfer their data and make sure they’ve got everything they had from their old phone,” Kay said during the same briefing. “So there’s a lot more going on with that.”
Technology
Millions of AI chat messages exposed in app data leak
A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online.
The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps.
These were not harmless prompts. They were full chat histories tied to real users.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)
What exactly was exposed
The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.
The exposed data reportedly included:
- Full chat histories with the AI
- Timestamps for each conversation
- The custom name users gave the chatbot
- How users configured the AI model
- Which AI model was selected
That matters because many users treat AI chats like private journals, therapists or brainstorming partners.
How this AI app stores so much sensitive user data
Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
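In practice, Firebase misconfigurations of this kind usually come down to overly permissive security rules. The app's actual configuration has not been published, so the following is a purely hypothetical illustration of the pattern: the first rule lets any signed-in user read every stored chat, while the second scopes access so each user can only reach records stored under their own user ID.

```json
{
  "rules": {
    "chats": {
      // Insecure: any authenticated user can read and write ALL chats.
      // ".read": "auth != null",
      // ".write": "auth != null",

      // Safer: each user can only access chats stored under their own uid.
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The difference is one line of scoping, which is why researchers describe this class of flaw as both well known and easy to find: an attacker simply signs in as an ordinary user and checks whether the database answers queries for other people's records.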
We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.
The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)
Why this matters to everyday users
Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.
Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)
Ways to stay safe when using AI apps
You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.
1) Be mindful of sensitive topics
AI chats can feel private, especially when you are stressed, curious or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.
2) Research the app before installing
Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.
3) Assume conversations may be stored
Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.
4) Limit account linking and sign-ins
Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking or personal communication.
5) Review app permissions and data controls
AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention or turn off syncing, enable those settings.
6) Use a data removal service
Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaways
AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.
Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Republicans attack ‘woke’ Netflix — and ignore YouTube
When Netflix co-CEO Ted Sarandos entered the Senate office building on Tuesday, he got thrown a curveball. What started as a standard antitrust hearing relating to the Warner Bros. merger quickly devolved into a performative Republican attack about the spread of “woke” ideology on the streaming service. At the same time, arguably a much more influential platform was completely ignored: YouTube.
After grilling Sarandos about residual payments, Sen. Josh Hawley (R-MO) launched into a completely different line of questioning: “Why is it that so much of Netflix content for children promotes a transgender ideology?” Hawley asked, making an unsubstantiated claim that “almost half” of the platform’s children’s content contains so-called “transgender ideology.” The statement echoed a pressure campaign launched by Elon Musk months ago in which he called on X users to unsubscribe from Netflix for having a “transgender woke agenda,” citing its few shows with trans characters — shows that were canceled years ago.
“Our business intent is to entertain the world,” Sarandos replied. “It is not to have a political agenda.” Still, other Republican lawmakers, including Sens. Ashley Moody (R-FL) and Eric Schmitt (R-MO), piled on, bringing up a post Netflix made following the murder of George Floyd, and the French film Cuties, which sparked a right-wing firestorm years ago. Sen. Ted Cruz (R-TX) even asked Sarandos what he thought about Billie Eilish’s “no one is illegal on stolen land” comment at the Grammys. It seemed like they were grasping at straws to support their narrative that Netflix’s acquisition of Warner Bros. could somehow poison the well of content for viewers.
“My concern is that you don’t share my values or those of many other American parents, and you want the United States government to allow you to become one of the largest — if not the largest — streaming monopolist in the world,” Hawley said. “I think we ought to be concerned about what content you’re promoting.”
While it’s true that Netflix will control a substantial portion of the streaming market when, or if, it acquires Warner Bros. and its streaming service HBO Max, it’s hard to criticize Netflix without bringing up YouTube.
“YouTube is not just cat videos anymore. YouTube is TV.”
For years now, Netflix has been trying to topple YouTube as the most-watched streaming service. Data from Nielsen says Netflix made up 9 percent of total TV and streaming viewing in the US in December 2025, while Warner Bros. Discovery’s services made up 1.4 percent. Combining the two doesn’t even stack up to YouTube, which held a 12.7 percent share of viewership during that time. “YouTube is not just cat videos anymore,” Sarandos told the subcommittee. “YouTube is TV.”
Unlike Netflix, YouTube is free and has an ever-growing library of user-created content that doesn’t require it to spend billions of dollars in production costs and licensing fees. YouTube doesn’t have to worry about maintaining subscribers, as anyone with access to a web browser or phone can open up and watch YouTube. The setup brings YouTube a constant stream of viewers that it can rope in with a slew of content it can recommend to watch next.
But not all creators on YouTube are striving for quality. As my colleague Mia Sato wrote, YouTube is home to creators who try to feed an algorithm that boosts inflammatory content and attempts to hook viewers, in addition to an array of videos that may be less than ideal for kids.
Like it or not, YouTube is the dominant streamer, with an endless supply of potentially offensive agendas for just about anyone. But for some reason, it’s not the target of this culture war. If these lawmakers actually cared about what their kids are watching, maybe they’d start looking more closely at how YouTube prioritizes content. Or, if they don’t like the shows and movies on Netflix, they could just do what Sarandos suggested during the hearing: unsubscribe.