
Technology

Millions of AI chat messages exposed in app data leak


A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online. 

The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps. 

These were not harmless prompts. They were full chat histories tied to real users.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.



Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)

What exactly was exposed

The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.
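
To make that concrete, here is a minimal sketch of how an outsider could read an over-permissive Firestore database using the official client SDK. The project values, the "chats" collection name and the rule quoted in the comments are assumptions for illustration only; the researcher has not published the app's actual configuration, and this is not Codeway's code.

```typescript
// Hypothetical sketch: reading an over-permissive Firebase/Firestore backend.
// All names and config values here are invented for illustration.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { getFirestore, collection, getDocs } from "firebase/firestore";

const app = initializeApp({
  apiKey: "public-web-api-key", // Firebase web API keys are not secrets
  projectId: "example-chat-app", // placeholder project ID
});

async function dumpChats(): Promise<void> {
  // If anonymous sign-in is enabled, anyone can become an "authenticated" user.
  await signInAnonymously(getAuth(app));

  // If security rules allow any signed-in user to read the collection, for
  // example "allow read: if request.auth != null;", this query returns every
  // user's conversations, not just the caller's own.
  const snapshot = await getDocs(collection(getFirestore(app), "chats"));
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}

dumpChats().catch(console.error);
```

The usual fix, broadly speaking, is to write security rules that scope reads to the document's owner rather than to any signed-in user.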

The exposed data reportedly included:

  • Full chat histories with the AI
  • Timestamps for each conversation
  • The custom name users gave the chatbot
  • How users configured the AI model
  • Which AI model was selected

That matters because many users treat AI chats like private journals, therapists or brainstorming partners.

How this AI app stores so much sensitive user data

Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
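
For readers who want a picture of what a wrapper app actually does, here is a minimal sketch under stated assumptions: the app forwards the user's message to a model provider's API, then writes the full exchange to its own Firebase database. The provider answers the question, but the wrapper is the party that keeps the history. The collection name, field names and the use of the OpenAI Node SDK are illustrative assumptions, not details taken from Chat & Ask AI.

```typescript
// Hypothetical wrapper flow: the model provider answers, the app stores the chat.
// Names, fields and model choice are assumptions for illustration only.
import OpenAI from "openai";
import { initializeApp } from "firebase/app";
import {
  getFirestore,
  collection,
  addDoc,
  serverTimestamp,
} from "firebase/firestore";

// A production backend would more likely use the Firebase Admin SDK; the web
// SDK is used here only to keep the sketch short.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const app = initializeApp({
  apiKey: "public-web-api-key", // placeholder Firebase config
  projectId: "example-chat-app",
});

async function askAndStore(userId: string, prompt: string): Promise<string> {
  // 1) Forward the user's message to the underlying model.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [{ role: "user", content: prompt }],
  });
  const reply = completion.choices[0].message.content ?? "";

  // 2) Persist the full exchange in the app's own database. If this collection
  //    is readable by other users, the conversation is effectively public.
  await addDoc(collection(getFirestore(app), "chats"), {
    userId,
    prompt,
    reply,
    createdAt: serverTimestamp(),
  });

  return reply;
}
```

In a design like this, nothing OpenAI, Anthropic or Google does can protect the stored history; the security of those conversations rests entirely on how the wrapper configures its own database.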


We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.


The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)

Why this matters to everyday users

Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.



Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)

Ways to stay safe when using AI apps

You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.

1) Be mindful of sensitive topics

AI chats can feel private, especially when you are stressed, curious or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.

2) Research the app before installing

Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.

3) Assume conversations may be stored

Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.


4) Limit account linking and sign-ins

Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking or personal communication.

5) Review app permissions and data controls

AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention or turn off syncing, enable those settings.

6) Use a data removal service

Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


Kurt’s key takeaways

AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.

Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.

Technology

Valve’s huge SteamOS 3.8 update adds long-awaited features — and supports Steam Machine


Not only is it the first release to support the upcoming Steam Machine living room gaming PC, it comes with long-awaited features for Valve’s handhelds and more support for other companies’ handhelds than we’ve seen to date — including Microsoft and Asus’ Xbox Ally series, the Lenovo Legion Go 2, the OneXPlayer X1, and additional support for MSI, GPD, Anbernic, OrangePi, and Zotac.

The one that excites me most: Valve is adding genuine hibernation and “memory power down” modes to the Steam Deck — though just the LCD model to start — which should help extend battery life when you hit the power button or leave it idle. Some Windows machines currently last longer than the Steam Deck when asleep, because they self-hibernate to save power, while the Steam Deck has an instant-on sleep mode.

Plus, Valve has finally added a setting in its gaming mode to let you use your Bluetooth headset microphones — something I’ve been asking for since the beginning. (Valve did add it to the Linux desktop mode last year.) And the Steam Deck LCD is finally getting Bluetooth Wake re-enabled, so you can turn on your TV-connected Deck with a wireless controller from your couch.

The update comes with all sorts of improvements for the Linux desktop modes that sound like they’ll come in handy on a Steam Machine plugged into a TV or monitor, too, including desktop HDR, VRR display support, per-display scaling, “improved windowing behavior for games running in Proton,” and an upgrade to KDE Plasma 6.4.3 among other things.

And for a Steam Machine or Steam handheld plugged into a home entertainment system, SteamOS can now detect how many audio channels you have over HDMI to enable surround sound. (I believe surround sound was already a thing, so perhaps this is just a different and better automatic implementation.)


There’s also a new Arch system base and an updated graphics driver.

Perhaps most surprisingly, the “Non-Deck” section of the changelog is huge. Valve says long-pressing your power button should work “across a wide variety of devices” to power off, restart, or switch to the desktop mode. You should be able to change your processor’s power modes on the Xbox Ally now, and night mode and screen color settings should work on AMD Z2 Extreme handhelds in general.

There’s also “Greatly improved video memory management with discrete GPU platforms,” you can limit how far the battery charges in any of the Lenovo Legion Go handhelds (in desktop mode), and it should fix “washed out colors for Zotac and OneXPlayer handhelds with OLED.”

There’s a lot in this update, and it’s possible I missed a feature you care about, so check out the whole changelog.


Technology

Fox News AI Newsletter: Wall-climbing robots swarm US Navy warships


Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY’S NEWSLETTER:

WATCH: Wall-climbing robot swarms crawl US Navy warships as China’s fleet surges

OPINION: AI comes with a hefty charge, and you are the one who gets stuck with the bill


Dell workforce shrinks 10% for third consecutive year

Swarms of wall-climbing robots will soon be crawling across U.S. Navy warships in a $71 million effort to slash repair delays and boost fleet readiness as China continues expanding its naval power. (Gecko Robotics)

TECH AT SEA: WATCH: Wall-climbing robot swarms crawl US Navy warships as China’s fleet surges – Fox News Digital reports on a new development in naval technology, featuring wall-climbing robot swarms that are crawling on U.S. Navy warships. This advancement comes at a critical time in defense politics as China’s naval fleet continues to surge in size and capability.

WALLET SHOCK: OPINION: AI comes with a hefty charge, and you are the one who gets stuck with the bill – In this opinion piece, the author discusses the economic implications of the growing artificial intelligence industry. The article argues that the hefty costs associated with AI development and its massive energy infrastructure will ultimately be passed down, leaving everyday consumers to foot the bill.

Dell Technologies headquarters in Round Rock, Texas, US, on Sunday, Nov. 26, 2023.  (Sergio Flores/Bloomberg via Getty Images)


COST CRUNCH: Dell workforce shrinks 10% for third consecutive year – Fox Business reports that Dell’s workforce has shrunk by ten percent. This marks the third consecutive year of workforce reductions for the major technology company amid shifting economic conditions and corporate restructuring.

AIMING HIGH: FULL AUTONOMY: AI pilot technology advances towards military capability – Merlin CEO Matt George details how the company is using artificial intelligence to enable military and commercial aircraft to operate fully autonomously on Fox Business’ ‘The Claman Countdown.’

Single family homes in a residential neighborhood in San Marcos, Texas, US, on Tuesday, March 12, 2024. (Photographer: Jordan Vonderhaar/Bloomberg via Getty Images)

SHOULD I BUY?: Homebuyers, sellers turning to AI chatbots for advice – Prairie Operating Co.’s Lou Basenese and real estate broker Kirsten Jordan discuss how artificial intelligence is impacting homebuyers and sellers on ‘Fox Business In Depth.’

DISRUPTION IS HERE: Charles Payne: AI disruption is here – Fox Business host Charles Payne discusses the economic impact of the rise in artificial intelligence on ‘Making Money.’


BUILDING HER BUSINESS: How Angie Hicks turned Angi into a home services giant and AI player – Angi co-founder Angie Hicks discusses entrepreneurship, company growth and how she built out her business on ‘Mornings with Maria.’


Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.


Technology

A rogue AI led to a serious security incident at Meta


For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that “no user data was mishandled” during the incident.

A Meta engineer was using an internal AI agent, which Clayton described as “similar in nature to OpenClaw within a secure development environment,” to analyze a technical question another employee posted on an internal company forum. But after analyzing the question, the agent also replied to it publicly on its own, without getting approval first. The reply was meant to be shown only to the employee who requested it, not posted publicly.

An employee then acted on the AI’s advice, which “provided inaccurate information” that led to a “SEV1” level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information — and it’s not clear whether the employee who originally prompted the answer planned to post it publicly.

“The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee’s own reply on that thread,” Clayton commented to The Verge. “The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided.”


Last month, an AI agent from the open source platform OpenClaw went more directly rogue at Meta: when an employee asked it to sort through the emails in her inbox, it deleted emails without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don’t always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.
