Technology
Millions of AI chat messages exposed in app data leak
A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online.
The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps.
These were not harmless prompts. They were full chat histories tied to real users.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)
What exactly was exposed
The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.
The exposed data reportedly included:
- Full chat histories with the AI
- Timestamps for each conversation
- The custom name users gave the chatbot
- How users configured the AI model
- Which AI model was selected
That matters because many users treat AI chats like private journals, therapists or brainstorming partners.
How this AI app stores so much sensitive user data
Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
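The reporting doesn't specify the exact rule that was misconfigured, but Firebase databases are typically locked down with security rules that scope reads and writes to the authenticated owner. As an illustration only, with hypothetical collection names, a minimal Firestore rules sketch that restricts each user's chat history to that signed-in user might look like this:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the signed-in owner may read or write their own chat documents.
    match /users/{userId}/chats/{chatId} {
      allow read, write: if request.auth != null
                          && request.auth.uid == userId;
    }
  }
}
```

When rules like these are left in a permissive default state, anyone who can authenticate to the app, or in the worst case anyone at all, can query other users' records, which is consistent with the kind of exposure the researcher describes.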
We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.
The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)
Why this matters to everyday users
Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.
Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)
Ways to stay safe when using AI apps
You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.
1) Be mindful of sensitive topics
AI chats can feel private, especially when you are stressed, curious or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.
2) Research the app before installing
Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.
3) Assume conversations may be stored
Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.
4) Limit account linking and sign-ins
Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking or personal communication.
5) Review app permissions and data controls
AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention or turn off syncing, enable those settings.
6) Use a data removal service
Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaways
AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.
Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Defense Secretary Pete Hegseth designates Anthropic a supply chain risk
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
What Trump’s ‘ratepayer protection pledge’ means for you
When you open a chatbot, stream a show or back up photos to the cloud, you are tapping into a vast network of data centers. These facilities power artificial intelligence, search engines and online services we use every day. Now there is a growing debate over who should pay for the electricity those data centers consume.
During President Trump’s State of the Union address this week, he introduced a new initiative called the “ratepayer protection pledge” to shift AI-driven electricity costs away from consumers. The core idea: tech companies that run energy-intensive AI data centers should cover the cost of the extra electricity they require rather than passing those costs on to everyday customers through higher utility rates.
It sounds simple. The hard part is what happens next.
At the State of the Union address Feb. 24, 2026, President Trump unveiled the “ratepayer protection pledge” aimed at shielding consumers from rising electricity costs tied to AI data centers. (Nathan Posner/Anadolu via Getty Images)
Why AI is driving a surge in electricity demand
AI systems require enormous computing power. That computing power requires enormous electricity. Today’s data centers can consume as much power as a small city. As AI tools expand across business, healthcare, finance and consumer apps, energy demand has risen sharply in certain regions.
Utilities have warned that the current grid in many parts of the country was not built for this level of concentrated demand. Upgrading substations, transmission lines and generation capacity costs money. Traditionally, those costs have been spread across the rates paid by homes and small businesses. That is where the pledge comes in.
What the ratepayer protection pledge is designed to do
Under the ratepayer protection pledge, large technology companies would:
- Cover the full cost of additional electricity tied to their data centers
- Build their own on-site power generation to reduce strain on the public grid
Supporters say this approach separates residential energy costs from large-scale AI expansion. In other words, your household bill should not rise simply because a new AI data center opens nearby. So far, Anthropic is the clearest public backer. CyberGuy reached out to Anthropic for comment on its role in the pledge. A company spokesperson referred us to a tweet from Anthropic Head of External Affairs Sarah Heck.
“American families shouldn’t pick up the tab for AI,” Heck wrote in a post on X. “In support of the White House ratepayer protection pledge, Anthropic has committed to covering 100% of electricity price increases that consumers face from our data centers.”
That makes Anthropic one of the first major AI companies to publicly state it will absorb consumer electricity price increases tied to its data center operations. Other major firms may be close behind. The White House reportedly plans to host Microsoft, Meta and Anthropic in early March to discuss formalizing a broader deal, though attendance and final terms have not been confirmed publicly.
Microsoft also expressed support for the initiative.
“The ratepayer protection pledge is an important step,” Brad Smith, Microsoft vice chair and president, said in a statement to CyberGuy. “We appreciate the administration’s work to ensure that data centers don’t contribute to higher electricity prices for consumers.”
Industry groups also point to companies such as Google and utilities including Duke Energy and Georgia Power as making consumer-focused commitments tied to data center growth. However, enforcement mechanisms and long-term regulatory details remain unclear.
The White House plans talks with Microsoft, Meta and Anthropic about shifting AI energy costs away from consumers. (Eli Hiller/For The Washington Post via Getty Images)
How this could change the economics of AI
AI infrastructure is already one of the most expensive technology buildouts in history. Companies are investing billions in chips, servers and real estate. If firms must also finance dedicated power plants or pay premium rates for grid upgrades, the cost of running AI systems increases further. That could lead to:
- Slower expansion in some markets
- Greater investment in renewable energy and storage
- More partnerships between tech firms and utilities
Energy strategy may become just as important as computing strategy. For consumers, this shift signals that electricity is now a central part of the AI conversation. AI is no longer only about software. It is also about infrastructure.
The bigger consumer tech picture
AI is becoming embedded in smartphones, search engines, office software and home devices. As adoption grows, so does the hidden infrastructure supporting it. Energy is now part of the conversation around everyday technology. Every AI-generated image, voice command or cloud backup depends on a power-hungry network of servers.
By asking companies to account more directly for their electricity use, policymakers are acknowledging a new reality. The digital world runs on very physical resources. For you, that shift could mean more transparency. It also raises new questions about sustainability, local impact and long-term costs.
As AI expansion strains the grid, a new proposal would require tech firms to fund their own power needs. (Sameer Al-Doumy/AFP via Getty Images)
What this means for you
If you are a homeowner or renter, the practical question is simple. Will this protect my electric bill? In theory, separating data center energy costs from residential rates could reduce the risk of price spikes tied to AI growth. If companies fund their own generation or grid upgrades, utilities may have less reason to spread those costs among all customers.
That said, utility pricing is complex. It depends on state regulators, long-term planning and local energy markets.
Here is what you can watch for in your area:
- New data center construction announcements
- Utility filings that mention large commercial load growth
- Public service commission decisions on rate adjustments
Even if you rarely use AI tools, your community could feel the effects of a nearby data center. The pledge is intended to keep those large-scale power demands from showing up in your monthly bill.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
The ratepayer protection pledge highlights an important turning point. AI is no longer only about innovation and speed. It is also about energy and accountability. If tech companies truly absorb the cost of their expanding power needs, households may avoid some of the financial strain tied to rapid AI growth. If not, utility bills could become an unexpected front line in the AI era.
As AI tools become part of daily life, how much extra power are you willing to support to keep them running? Let us know by writing to us at Cyberguy.com.
Here’s your first look at Kratos in Amazon’s God of War show
Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image, but a notable one, showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.
There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:
The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.
That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri) and Danny Woodburn (Brok).
While production is underway on the God of War series, there’s no word on when it might start streaming.