Technology
This iPod prototype was hiding Apple’s unreleased Tetris clone
YouTuber Apple Demo has found a prototype third-generation iPod that contains a game called Stacker, which never made it to retail models. In addition to Apple’s own version of Tetris, the engineering sample iPod also came loaded with other unreleased titles, including games called Block0 and Klondike, as noted by Engadget.
On the back of the prototype iPod, a “DVT” (Design Validation Testing) label is etched where the storage capacity normally goes, which, Apple Demo explains, indicates it came from the middle stage of development. Two songs still in its storage and a helpfully named playlist suggest the device was used for battery testing.
After some tinkering, which included transplanting the internal hard disk into a second-generation iPod, Apple Demo got the device to boot normally. Of the games available, they demoed only Stacker.
Apple Demo even contacted Tony Fadell, the former SVP of Apple’s iPod division, to ask why the Tetris clone was never released. Fadell’s only comment, dating from 2022, was “because we added games with later software release,” leaving the internal story of Stacker a mystery for now. Apple did release a licensed Tetris game years later on the “Classic” iPod models, which supported new game titles purchased from the iTunes Store.
Stacker uses the iPod’s click wheel to move falling blocks left and right, and the center button drops them to the bottom of the screen. The objective, as in Tetris, is to rack up a high score by completing and clearing lines of bricks without letting the stack reach the top of the screen. The game isn’t entirely polished: the video shows at least one bug where a brick overlapped a stack and got stuck when rotated. But it works!
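The line-clearing mechanic described above is simple enough to sketch in a few lines. This is an illustrative reconstruction only, not Apple’s actual Stacker code; the board representation (a list of rows, nonzero cells filled) is an assumption for the example:

```python
# Minimal sketch of Tetris-style line clearing (illustrative; not Apple's Stacker code).
# The board is a list of rows, top to bottom; a row is "complete" when every cell is filled.

def clear_lines(board, width):
    """Remove complete rows, pad the top with empty rows, and return (new_board, cleared_count)."""
    remaining = [row for row in board if not all(row)]  # keep rows with at least one gap
    cleared = len(board) - len(remaining)
    empty_rows = [[0] * width for _ in range(cleared)]  # everything above shifts down
    return empty_rows + remaining, cleared

board = [
    [1, 1, 1, 1],  # complete row: cleared
    [1, 0, 1, 0],  # incomplete: kept
    [1, 1, 1, 1],  # complete row: cleared
]
new_board, n = clear_lines(board, 4)
# n == 2; the surviving row drops to the bottom with empty rows above it
```

A real implementation would also award score per cleared line and end the game when a new piece can no longer spawn at the top.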
Fox News AI Newsletter: ‘The American people are being lied to about AI’
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.
IN TODAY’S NEWSLETTER:
– Palantir’s Shyam Sankar: Americans are ‘being lied to’ about AI job displacement fears
– OPINION: Elon Musk says you can skip retirement savings in the age of AI. Not so fast
– Chevron CEO details strategy to shield consumers from soaring AI power costs
LIES EXPOSED: “The American people are being lied to about AI,” Palantir CTO Shyam Sankar warns in the opening line of his new Fox News op-ed. And one of the biggest lies, he said, is that artificial intelligence is coming for Americans’ jobs.
Shyam Sankar, chief technology officer of Palantir Technologies Inc., speaks during the Hill & Valley forum at the U.S. Capitol in Washington, D.C., on Wednesday, April 30, 2025. (Getty Images)
RISKY RETIREMENT: Billionaire Elon Musk recently told people not to worry about “squirreling” money away for retirement because advances in artificial intelligence would supposedly make savings irrelevant in the next 10 to 20 years.
OFF-THE-GRID: Chevron CEO Mike Wirth detailed the company’s strategy to harness U.S. natural resources to meet soaring artificial intelligence power demand — without passing the cost along to consumers.
The COL4 AI-ready data center is located on a seven-acre campus at the convergence point of long-haul fiber and regional carrier fiber networks on July 24, 2025, in Columbus, Ohio. (Eli Hiller/For The Washington Post via Getty Images)
POWER CRISIS NOW: Artificial intelligence and data centers have been blamed for rising electricity costs across the U.S. As of December 2025, American consumers paid 42% more to power their homes than ten years earlier.
LATEST POLLING: As the emphasis on implementing artificial intelligence across society grows, voters think the use of AI technology is happening too fast — and they have little confidence the federal government can regulate it properly.
PRIVACY NIGHTMARE: A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online.
CAP-EX SURGE: Alphabet executives struck a confident tone on Wednesday’s post-earnings call, signaling that Google’s heavy investments in artificial intelligence are now translating into real revenue growth across the business.
Google Headquarters is seen in Mountain View, California, on May 15, 2023. (Tayfun Coskun/Anadolu Agency via Getty Images)
MERIT OVER FEAR: Shyam Sankar, the chief technology officer and executive vice president of Palantir Technologies, told Fox News Digital that artificial intelligence will be a “massively meritocratic force” within the workplace and offered advice to corporate leaders on how to best position their companies and employees for success.
FAKE LOVE HEIST: A woman named Abigail believed she was in a romantic relationship with a famous actor. The messages felt real. The voice sounded right. The video looked authentic. And the love felt personal. By the time her family realized what was happening, more than $81,000 was gone — and so was the paid-off home she planned to retire in.
Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.
Google is expanding AirDrop support to more Android devices ‘very soon’
After introducing AirDrop support to Pixel 10 devices last year, Google is now set to expand it to phones made by other Android partners. Eric Kay, vice president of engineering for Android, confirmed in a press briefing attended by Android Authority that “a lot more” Android devices will be able to use Quick Share to initiate AirDrop sessions with Apple devices this year.
“We spent a lot of time and energy to make sure that we could build something that was compatible not only with iPhone but iPads and MacBooks,” Kay said. “Now that we’ve proven it out, we’re working with our partners to expand it into the rest of the ecosystem, and you should see some exciting announcements coming very soon.”
Currently, Google’s Pixel 10 phones are the only Android devices that can use Quick Share — Android’s own wireless peer-to-peer transfer feature, previously known as Nearby Share — to communicate directly with Apple’s AirDrop. Google hasn’t outlined any specific Android partners or devices for the update yet, but both Nothing and chipmaker Qualcomm teased in November that support was coming.
Kay also discussed Google’s efforts to improve the switching process for iOS users moving to Android, helping to prevent incomplete data transfers, lost messages, and other issues. Apple has been working on a “user-friendly” way of transferring data from iPhones to other devices since early 2024, and the two companies’ collaborative work surfaced in testing in Android Canary 2512 for Pixel devices in December.
“We’re also going to be working to make it easy for people who do decide to switch to transfer their data and make sure they’ve got everything they had from their old phone,” Kay said during the same briefing. “So there’s a lot more going on with that.”
Millions of AI chat messages exposed in app data leak
A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online.
The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps.
These were not harmless prompts. They were full chat histories tied to real users.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)
What exactly was exposed
The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.
The exposed data reportedly included:
- Full chat histories with the AI
- Timestamps for each conversation
- The custom name users gave the chatbot
- How users configured the AI model
- Which AI model was selected
That matters because many users treat AI chats like private journals, therapists or brainstorming partners.
How this AI app stores so much sensitive user data
Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
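To see how such a misconfiguration works in practice, consider a hypothetical, overly permissive Firestore ruleset; the app’s actual configuration is unknown, but rules like these would let any signed-in client, including one using anonymous sign-in, read every document in the database:

```
// firestore.rules — hypothetical, overly permissive ruleset (not Codeway's actual config)
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Matches every document in the database.
    match /{document=**} {
      // Any authenticated user can read everything, and Firebase's
      // anonymous sign-in makes "authenticated" trivial to achieve.
      allow read: if request.auth != null;
    }
  }
}
```

The standard mitigation is per-user scoping, for example checking `request.auth.uid` against an owner field on each document, so a client can read only its own records.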
We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.
The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)
Why this matters to everyday users
Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.
Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)
Ways to stay safe when using AI apps
You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.
1) Be mindful of sensitive topics
AI chats can feel private, especially when you are stressed, curious or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.
2) Research the app before installing
Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.
3) Assume conversations may be stored
Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.
4) Limit account linking and sign-ins
Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking or personal communication.
5) Review app permissions and data controls
AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention or turn off syncing, enable those settings.
6) Use a data removal service
Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaways
AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.
Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.