Technology
Grok AI scandal sparks global alarm over child safety
Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.
In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
That admission alone is alarming. What followed revealed a far broader pattern.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children. (Silas Stein/picture alliance via Getty Images)
Grok quietly restricts image tools to paying users after backlash
As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.
The apology that raised more questions
Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.
Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.
After reviewing Grok’s publicly accessible photo feed, Copyleaks identified, by its own conservative estimate, roughly one nonconsensual sexualized image per minute, counting only images of real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale AI-enabled harassment.
Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”
Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)
Sexualized images of minors are illegal
This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.
In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.
The scale of the problem is growing fast
A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.
Real people are being targeted
The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.
Governments respond worldwide
The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.
Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)
Concerns grow over Grok’s safety and government use
The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.
Elon Musk, the owner of X and founder of xAI, has not offered a public response as of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.
Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question. Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?
What parents and users should know
If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.
Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.
Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.
Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.
Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
YouTube’s mobile app finally lets you share timestamped videos
YouTube is changing how you share videos from its mobile app. You can finally share a video from a specific timestamp, making it easier to point someone to the exact moment you want them to see while you’re on your phone. However, this change replaces the Clips feature, which let you make a shareable clip from a video.
You’ll still be able to watch any Clips you’ve already made. But moving forward, “the ability to set an end time or include a custom description when sharing will no longer be available,” YouTube says. The company notes that while clipping is “an important way for creators to reach new audiences,” “a number of third-party tools with advanced clipping features and authorized creator programs are now available to do this across different video platforms.”
The company originally introduced the Clips feature in 2021.
Technology
Meta employee accused of accessing private images
When you upload a photo to Facebook, you expect it to stay private unless you decide otherwise. That expectation just took a hit after a former employee of Meta was accused of accessing thousands of private images.
According to details confirmed by the company, the London-based employee allegedly created a program to bypass internal safeguards. Investigators say this may have allowed access to about 30,000 private Facebook images that were not meant to be viewed.
The individual is now under criminal investigation and is out on bail as authorities continue to review the case. Here’s how investigators say the access may have happened.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com, trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
A former Meta employee is accused of accessing thousands of private Facebook images, raising new concerns about how user data is protected. (Fabian Sommer/picture alliance via Getty Images)
How the Meta employee allegedly accessed private images
Authorities believe the employee may have written a script to get around Meta’s internal detection systems. In simple terms, the system that should flag unusual behavior may not have caught the activity right away. This detail matters because large tech platforms rely on monitoring tools to detect suspicious access patterns. When those checks are bypassed, it raises questions about how internal access is controlled.
The investigation is being handled by the cybercrime unit of the Metropolitan Police in London. At the same time, security experts often point out that insider threats are difficult to eliminate. Even strong systems can be tested when someone inside the company misuses access.
What Meta says about the employee investigation
Meta says it discovered the improper access more than a year ago and took action after identifying the issue.
“Protecting user data is our top priority,” a Meta spokesperson told CyberGuy. “After discovering improper access by an employee over a year ago, we immediately terminated the individual, notified users, referred the matter to law enforcement and enhanced our security measures. We are cooperating with the ongoing investigation.”
Legal risks in the Meta private images case
Data protection experts say cases like this often come down to both intent and safeguards. If an employee accesses personal data without authorization, that can lead to criminal charges under data protection and computer misuse laws. However, the company’s responsibility depends on the protections it had in place. If proper safeguards existed, the focus usually remains on the individual.
If not, regulators may consider penalties or legal claims against the company. The Information Commissioner’s Office, the U.K.’s data privacy watchdog, has acknowledged the incident. The agency stressed that social media users should be able to trust how their personal information is handled.
Why the Meta investigation is drawing attention now
This case is unfolding at a time when scrutiny of major tech platforms is already high. Recent legal challenges have raised broader concerns about how companies protect users and manage risk. That context adds weight to this investigation. It reflects a larger conversation about privacy and accountability in the tech industry. As more people rely on digital platforms, expectations of data protection continue to rise. Incidents like this tend to reinforce those concerns.
Mark Zuckerberg walks through the U.S. Capitol after a meeting on March 26, 2026. Investigators in London say a former Meta employee may have used a script to bypass safeguards and view about 30,000 private Facebook images. (Tom Williams/CQ-Roll Call, Inc via Getty Images)
Simple ways to protect your private photos
Even though this case involves an insider, there are still simple steps you can take to better protect your photos and limit who can see them.
1) Check your Facebook privacy settings
You cannot control what happens inside a company, but you can limit how much of your personal content is exposed. Start by reviewing your Facebook privacy settings.
(Settings may vary depending on device and app version)
Mobile (iPhone/Android):
Facebook: Menu > Settings & privacy > Settings > Audience and visibility > Posts > Who can see your future posts > select Friends (or a custom audience) > Save
Desktop (Mac/PC):
Facebook: Profile picture (top right) > Settings & privacy > Settings > Audience and visibility section > Posts > Who can see your future posts > select Friends (or a custom audience) > Done
2) Review older photos and albums
Next, go through older photos and albums. Many people forget that photos shared years ago may still be visible under outdated settings.
(Settings may vary depending on device and app version)
Mobile (iPhone/Android):
Facebook: Menu > Settings & privacy > Settings > Audience and visibility > Posts > Limit who can see past posts > Limit past posts > confirm
Desktop (Mac/PC):
Facebook: Profile picture > Settings & privacy > Settings > Audience and visibility section > Posts > Limit who can see past posts > Limit past posts > confirm
And check individual albums:
Mobile (iPhone/Android):
Facebook: Go to your profile > Photos > Albums > select an album > tap Edit (top right) > Who can see this? > choose who can see it > Done
Desktop (Mac/PC):
Facebook: click your name on the left > Photos > Albums > select an album > click the three dots > Edit album > choose who can see it > Done
Not all albums can be changed, and some system albums have limited privacy options.
3) Be careful what you upload
It also helps to limit what you upload in the first place. Sensitive images, documents or anything you would not want widely seen may be better kept off social platforms entirely.
Authorities are investigating whether a former Meta employee improperly accessed private Facebook photos that users never intended to share. (Gabby Jones/Bloomberg via Getty Images)
4) Turn on account activity alerts and two-factor authentication
You can also enable alerts for unusual account activity. While this case involves an insider, account alerts still help you spot unauthorized access to your own profile. You can also turn on two-factor authentication (2FA) to add another layer of protection to your account.
How to turn on account activity alerts
(Settings may vary depending on device and app version)
Mobile (iPhone/Android):
Facebook: Menu > Settings & privacy > Settings > Accounts Center > Password and security > Security Checkup > review and complete recommended security steps
Desktop (Mac/PC):
Facebook: Profile picture (top right) > Settings & privacy > Settings > Accounts Center > Password and security > Security Checkup > review and complete recommended security steps
How to turn on two-factor authentication
(Settings may vary depending on device and app version)
Mobile (iPhone/Android):
Facebook: Menu > Settings & privacy > Settings > Password and security > Two-factor authentication > choose text message or authentication app > follow prompts
Desktop (Mac/PC):
Facebook: Profile picture > Settings & privacy > Settings > Password and security > Two-factor authentication > choose text message or authentication app > follow prompts
5) Check third-party app access
Take a few minutes to review which apps have access to your Facebook account. Third-party apps can sometimes hold more access than you expect.
(Settings may vary depending on device and app version)
Mobile (iPhone/Android):
Facebook: Menu > Settings & privacy > Settings > Apps and websites > Active > tap an app > Remove
Desktop (Mac/PC):
Facebook: Profile picture (top right) > Settings & privacy > Settings > Apps and websites > Active > click an app > Remove
If you don’t see any apps listed or options like “Active,” it likely means you don’t have any connected apps to review.
What this means to you
If you use Facebook or similar platforms, this situation highlights something many people overlook. Even with strong safeguards, insider access still exists. Employees often need certain permissions to keep systems running. That creates a level of trust between users and the company.
When that trust is broken, it can feel personal. At the same time, there are still steps you can take on your end. Reviewing your privacy settings, limiting what you share and enabling security features can reduce how much of your content is exposed. It also shows why detection and response matter.
In this case, Meta says it identified the issue, removed the employee and notified users. Those steps can limit damage, but they do not erase the concern. The bigger takeaway is that privacy depends on both technology and human behavior. Systems can reduce risk, but they cannot remove it completely.
Kurt’s key takeaways
This case is still under investigation, and no final legal outcome has been announced. Even so, it highlights a risk many people rarely think about. Most privacy conversations focus on hackers. This situation is different. It shows how access from inside a company can create its own set of risks. Meta says it acted quickly by removing the employee, notifying users and strengthening its systems. Those steps matter, but they also show how much trust users place in the platforms they use every day. The reality is simple. Once you upload something online, you are trusting more than just the technology behind it.
If someone inside a company can access private data, how much control do you really have over what you share online? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Govee’s new LED Lightwall comes with its own self-standing frame
Govee has announced an upgraded version of its hanging Curtain Lights Pro that can instead be used nearly anywhere you have access to an outlet or large battery. At $449.99, Govee’s new Lightwall is more than twice as expensive as the $199.99 Curtain Lights Pro, but comes with more LEDs in a denser array and a self-standing aluminum frame that can be assembled in 10 to 15 minutes without the need for any tools.
When hung from its stand, the Lightwall measures 7.9 feet wide and 5.3 feet tall and features 1,536 color-changing LEDs spaced about 1.96 inches apart in a 48 x 32 grid. It’s water-resistant, and with a refresh rate of up to 35fps, the Lightwall almost sounds like it could be used as a personal backyard Jumbotron, but it’s not designed for watching TV or movies.
The Lightwall instead connects to Govee’s Home app where you can select from over 200 preset scenes and simple animations, choose from 10 different music modes that generate lighting patterns matched to beats, or synchronize its colors to other Govee lighting products to create a cohesive mood.
The app can also use AI to create custom animated GIFs from simple text prompts, or you can take matters into your own hands and create custom designs by sketching in the app with your finger and stacking up to 30 layers of doodles. The Lightwall is smart home compatible and supports Matter, too, so in addition to managing it through Govee’s app you can control it using voice commands through smart devices with Google Assistant or Amazon Alexa.