Technology
Company restores AI teddy bear sales after safety scare
FoloToy paused sales of its AI teddy bear Kumma after a safety group found the toy gave risky and inappropriate responses during testing. Now the company says it has restored sales after a week of intense review. It also claims that it improved safeguards to keep kids safe.
The announcement arrived through a social media post that highlighted a push for stronger oversight. The company said it completed testing, reinforced safety modules, and upgraded its content filters. It added that it aims to build age-appropriate AI companions for families worldwide.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter
FoloToy resumed sales of its AI teddy bear Kumma after a weeklong review prompted by safety concerns. (Kurt “CyberGuy” Knutsson)
Why FoloToy’s AI teddy bear raised safety concerns
The controversy started when the Public Interest Research Group Education Fund tested three different AI toys. All of them produced concerning answers that touched on religion, Norse mythology, and harmful household items.
Kumma stood out for the wrong reasons. When the bear used the Mistral model, it offered tips on where to find knives, pills, and matches. It even outlined steps to light a match and blow it out.
Tests with the GPT-4o model raised even sharper concerns. Kumma gave advice related to kissing and launched into detailed explanations of adult sexual content when prompted. The bear pushed further by asking the young user what they wanted to explore.
Researchers called the behavior unsafe and inappropriate for any child-focused product.
FoloToy paused access to its AI toys
Once the findings became public, FoloToy suspended sales of Kumma and its other AI toys. The company told PIRG that it started a full safety audit across all products.
OpenAI also confirmed that it suspended FoloToy’s access to its models for violating policies designed to protect anyone under 18.
The company says new safeguards and upgraded filters are now in place to prevent inappropriate responses. (Kurt “CyberGuy” Knutsson)
Why FoloToy restored Kumma’s sales after its safety review
FoloToy brought Kumma back to its online store just one week after suspending sales. The fast return drew attention from parents and safety experts who wondered if the company had enough time to fix the serious issues identified in PIRG’s report.
FoloToy posted a detailed statement on X that laid out its version of what happened. In the post, the company said it viewed child safety as its “highest priority” and that it was “the only company to proactively suspend sales, not only of the product mentioned in the report, but also of our other AI toys.” FoloToy said it took this action “immediately after the findings were published because we believe responsible action must come before commercial considerations.”
The company also emphasized to CyberGuy that it was the only one of the three AI toy startups in the PIRG review to suspend sales across all of its products and that it made this decision during the peak Christmas sales season, knowing the commercial impact would be significant. FoloToy told us, “Nevertheless, we moved forward decisively, because we believe that responsible action must always come before commercial interests.”
The company also said it took the report’s disturbing examples seriously. According to FoloToy, the issues were “directly addressed in our internal review.” It explained that the team “initiated a deep, company-wide internal safety audit,” then “strengthened and upgraded our content-moderation and child-safety safeguards,” and “deployed enhanced safety rules and protections through our cloud-based system.”
After outlining these steps, the company said it spent the week on “rigorous review, testing, and reinforcement of our safety modules.” It concluded its announcement by saying it “began gradually restoring product sales” as those updated safeguards went live.
FoloToy added that as global attention on AI toy risks grows, “transparency, responsibility and continuous improvement are essential,” and that the company “remains firmly committed to building safe, age-appropriate AI companions for children and families worldwide.”
Safety testers previously found the toy giving risky guidance about weapons, matches and adult content.
Why experts still question FoloToy’s AI toy safety fixes
PIRG researcher RJ Cross said her team plans to test the updated toys to see if the fixes hold up. She noted that a week feels fast for such significant changes, and only new tests will show if the product now behaves safely.
Parents will want to follow this closely as AI-powered toys grow more common. The speed of FoloToy’s relaunch raises questions about the depth of its review.
Tips for parents before buying AI toys
AI toys can feel exciting and helpful, but they can also surprise you with content you’d never expect. If you plan to bring an AI-powered toy into your home, these simple steps can help you stay in control.
1) Check which AI model the toy uses
Not every model follows the same guardrails. Some include stronger filters while others may respond too freely. Look for transparent disclosures about which model powers the toy and what safety features support it.
2) Read independent reviews
Groups like PIRG often test toys in ways parents cannot. These reviews flag hidden risks and point out behavior you may not catch during quick demos.
3) Set clear usage rules
Keep AI toys in shared spaces where you can hear or see how your child interacts with them. This helps you step in if the toy gives a concerning answer.
4) Test the toy yourself first
Ask the toy questions, try creative prompts, and see how it handles tricky topics. This lets you learn how it behaves before you hand it to your child.
5) Update the toy’s firmware
Many AI toys run on cloud systems. Updates often add stronger safeguards or reduce risky answers. Make sure the device stays current.
6) Check for a clear privacy policy
AI toys can gather voice data, location info, or behavioral patterns. A strong privacy policy should explain what is collected, how long it is stored, and who can access it.
7) Watch for sudden behavior changes
If an AI toy starts giving odd answers or pushes into areas that feel inappropriate, stop using it and report the problem to the manufacturer.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
AI toys can offer fun and learning, but they can also expose kids to unexpected risks. FoloToy says it improved Kumma’s safety, yet experts still want proof. Until the updated toy goes through independent testing, families may want to stay cautious.
Do you think AI toys can ever be fully safe for young kids? Let us know by writing to us at Cyberguy.com
Copyright 2025 CyberGuy.com. All rights reserved.
Jury finds Elon Musk’s ‘stupid tweets’ caused Twitter investors’ losses
A California jury determined that Elon Musk misled Twitter investors before making a $44 billion deal to buy the company in 2022, CNBC reports. The New York Times reports that Musk testified this month that he didn’t believe his posts would spook markets, though he did say, “If this was a trial about whether I made stupid tweets, I would say I’m guilty.”
CNBC reports Musk’s attorneys are expected to file an appeal, as damages could reach as high as $2.6 billion, according to attorneys representing the plaintiffs.
While finding that Musk did not engage in a specific scheme to defraud shareholders, the jury cited two of Musk’s tweets, from May 13th and May 27th, 2022, as materially false or misleading, causing some investors to sell shares in Twitter at values below the $54.20 per share bid.
“Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users,” Musk wrote in the first. The second read: “20% fake/spam accounts, while 4 times what Twitter claims, could be *much* higher. My offer was based on Twitter’s SEC filings being accurate. Yesterday, Twitter’s CEO publicly refused to show proof of <5%. This deal cannot move forward until he does.”
AI smart glasses could generate fake photos instantly
Smart glasses are gaining new momentum thanks to artificial intelligence (AI). Companies like Google, Meta, Samsung and possibly Apple are exploring AI-powered glasses that combine cameras, speakers, voice assistants and computer vision in a wearable device.
At first glance, the features sound familiar. Smart glasses can take photos, give directions, answer questions and help you navigate the world hands-free. However, a recent demo hints at something much bigger.
These glasses may soon generate or alter photos instantly. In other words, the image you capture may no longer reflect what was actually there.
That raises an important question: If AI can change a photo the moment it is taken, how do we know what is real anymore?
Google product lead Dieter Bohn demonstrates prototype AI smart glasses during a demo showing how the device can capture and modify photos using generative AI. (X/ @backlon)
A new AI trick inside smart glasses
During a demo of upcoming smart glasses, Google’s Dieter Bohn showed how the device could capture a photo and modify it using AI. The prototype, shown as Android XR glasses with a display, connects to Google’s generative AI tools, including Google Gemini and an experimental image generator called Nano Banana.
In the demonstration, Bohn asked the glasses to take a photo of people in the room. Then he gave another command. He asked the system to place those people in front of the famous church in Barcelona that he could not remember by name.
Within moments, the AI produced a new image showing the group standing in front of the Sagrada Família. The people in the photo never traveled to Spain. The background came from AI. To someone viewing the image later, it could look like a real travel photo.
Smart glasses are following the same playbook
The hardware approach behind these devices looks similar across the industry.
Most smart glasses include:
- A built-in camera
- Speakers for audio feedback
- A microphone and a voice assistant
- Computer vision powered by AI
- Navigation and contextual information
- Optional displays inside the lenses
This design mirrors products like the Ray-Ban Meta Smart Glasses, which combine sunglasses with an AI assistant and camera. Those glasses already allow users to capture photos, livestream video and ask questions using voice commands. However, the editing tools currently available inside Meta’s glasses focus more on artistic effects. For example, the system can transform photos into a cartoon or painting style. The goal is creative expression rather than photorealistic manipulation. Google’s demo hints at something different. It shows how AI can place people into entirely new scenes that never happened.
A close-up of prototype Android XR glasses with a built-in display, part of Google’s concept for AI-powered smart glasses. (X/ @backlon)
Why this matters for photography
AI-generated images already exist across social media. Smartphones have also introduced powerful editing tools. Google’s Pixel phones, for example, have leaned heavily into AI photography with tools that remove objects, adjust lighting and generate backgrounds.
The difference with smart glasses is speed. The technology removes the delay between taking a photo and editing it. Instead of capturing an image and opening editing software later, the AI can change the photo immediately. That could make altered images far more common. Photos that once served as proof of where someone was or what happened may become harder to trust.
The demo still leaves open questions
It is important to note that the Google demo was short and carefully staged. The company acknowledged that parts of the video were edited. That suggests the AI process may take longer in real-world conditions.
There is also the question of reliability. Generative AI tools sometimes produce mistakes, strange artifacts or unrealistic details. Still, even an imperfect system could change how people interact with cameras and images. As the technology improves, the gap between real and AI-generated photos may shrink.
What this means for you
Smart glasses could soon become another everyday device. That means the way we capture and share images may shift again. If these tools become common, you may start seeing photos that were generated or heavily modified by AI. A picture posted online may look like a real moment from someone’s life. In reality, it could be a mix of real people and AI-generated scenery. That does not mean every image is fake. It does mean digital images may carry less proof than they once did. Understanding how AI editing works can help you approach viral photos, travel shots or dramatic images with a healthy level of skepticism.
Ray-Ban Meta smart glasses combine cameras, speakers and an AI assistant, showing how wearable devices are bringing artificial intelligence into everyday eyewear. (Meta)
How to spot AI-generated or altered photos
AI editing tools are becoming easier to use. That means altered images may appear more often online. A few habits can help you avoid being misled.
1) Question images that look too perfect
If a photo looks unusually polished or dramatic, pause before assuming it is real. AI images often create scenes that feel cinematic or unusually clean.
2) Look closely at small details
AI systems sometimes struggle with small elements. Check hands, reflections, shadows and background objects for strange shapes or mismatched lighting.
3) Check where the image came from
If a photo spreads quickly online, try to trace the original source. Reverse image search can reveal if the picture appeared somewhere else first.
4) Be cautious with viral travel or event photos
AI tools can place people into locations they have never visited. A convincing background does not guarantee that the moment actually happened.
5) Watch for photos used in scams or misinformation
AI-generated images can appear in fake travel posts, romance scams or misleading news claims. If a photo appears alongside urgent requests for money or emotional stories, take time to verify it before reacting. Avoid clicking suspicious links and consider using strong antivirus software that can block malicious websites and scam pages before they load. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
6) Treat photos online as information, not proof
Photos once served as strong evidence of where someone was or what occurred. With generative AI, an image may be a mix of real people and computer-generated scenes.
Kurt’s key takeaways
Smart glasses promise convenience, hands-free computing and powerful AI tools. At the same time, they blur the line between photography and digital creation. Technology keeps pushing toward a world where capturing a moment and generating one can happen in the same instant. The devices themselves may become smaller and smarter. The challenge may be deciding how much we trust the images they produce.
So here is the question worth asking. If AI glasses can create realistic photos of places you’ve never visited, will pictures still count as proof of reality? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Microsoft is ending the Windows Update nightmare — and letting you pause them indefinitely
While Microsoft isn’t doing away with automatic updates entirely, Windows boss Pavan Davuluri is promising that in the future, you’ll be able to pause them “for as long as you need.” You’ll be able to reboot or shut down your computer “without being forced to install them.” To be fair to Microsoft, I’ve seen an option to reboot or shut down without updating for a while now.
Even if you fail to pause them, you’ll only have to reboot your computer once a month, Microsoft promises — though it says you’ll be able to get updates faster if you wish. If you’re the kind of user who wants new features so quickly that you’re part of the Windows Insider Program, Microsoft says it’ll make that easier and make it clearer what you’ll get.
And as part of those updates, Microsoft says that this year, it will improve performance, responsiveness and stability, reduce memory consumption, make File Explorer and other apps launch and run faster, reduce crashes, improve drivers, make devices wake up more reliably, and much, much more.
It feels like Microsoft has also taken our feedback about the recent ridiculous hour-plus setup process for some Windows handhelds and laptops to heart. Davuluri writes that we’ll have “the ability to skip updates during device setup to get to the desktop faster.” And even if you sit through it, there should be “fewer pages and reboots” so that getting started is simpler. Plus, Microsoft will finally let you use gamepad controls to create your PIN during setup, instead of making you smudge the touchscreen.
Bravo, Microsoft, if this is all true, and if you can implement it in a reasonable length of time.
Davuluri writes that his team has spent months analyzing the feedback of Windows users, and “What came through was the voice of people who care deeply about Windows and want it to be better.”