
Grok AI scandal sparks global alarm over child safety



Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That admission alone is alarming. What followed revealed a far broader pattern.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.



The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children.  (Silas Stein/picture alliance via Getty Images)

Grok quietly restricts image tools to paying users after backlash

As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.

The apology that raised more questions

Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.


After reviewing Grok’s publicly accessible photo feed, Copyleaks identified what it described as a conservative estimate of roughly one nonconsensual sexualized image per minute, counting only images of real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”


Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)

Sexualized images of minors are illegal

This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.


In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

The scale of the problem is growing fast

A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

Real people are being targeted

The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In several of those cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

Governments respond worldwide

The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face probes from the Department of Justice or lawsuits tied to these failures.


Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)

Concerns grow over Grok’s safety and government use

The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It competes directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

What parents and users should know

If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.


Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain among the most effective ways to protect children online.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com       


Kurt’s key takeaways

The Grok scandal highlights a dangerous reality: as AI adoption accelerates, these systems can amplify harm at unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.



Jury finds Elon Musk’s ‘stupid tweets’ caused Twitter investors’ losses


A California jury determined that Elon Musk misled Twitter investors before making a $44 billion deal to buy the company in 2022, reports CNBC. The New York Times reports that Musk had testified this month that he didn’t believe his posts would spook markets, but he did say that “If this was a trial about whether I made stupid tweets, I would say I’m guilty.”

CNBC reports that Musk’s attorneys are expected to file an appeal; damages could reach as high as $2.6 billion, according to attorneys representing the plaintiffs.

While finding that Musk did not engage in a specific scheme to defraud shareholders, the jury cited two of Musk’s tweets, from May 13th and May 27th, 2022, as materially false or misleading, causing some investors to sell Twitter shares at values below the $54.20-per-share bid. The tweets read:

Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users

20% fake/spam accounts, while 4 times what Twitter claims, could be *much* higher.

My offer was based on Twitter’s SEC filings being accurate.

Yesterday, Twitter’s CEO publicly refused to show proof of


This deal cannot move forward until he does.


AI smart glasses could generate fake photos instantly


Smart glasses are gaining new momentum thanks to artificial intelligence (AI). Companies like Google, Meta, Samsung and possibly Apple are exploring AI-powered glasses that combine cameras, speakers, voice assistants and computer vision in a wearable device.

At first glance, the features sound familiar. Smart glasses can take photos, give directions, answer questions and help you navigate the world hands-free. However, a recent demo hints at something much bigger.

These glasses may soon generate or alter photos instantly. In other words, the image you capture may no longer reflect what was actually there.

That raises an important question: If AI can change a photo the moment it is taken, how do we know what is real anymore?


Sign up for my FREE CyberGuy Report 

Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


Google product lead Dieter Bohn demonstrates prototype AI smart glasses during a demo showing how the device can capture and modify photos using generative AI. (X/ @backlon)

A new AI trick inside smart glasses

During a demo of upcoming smart glasses, Google’s Dieter Bohn showed how the device could capture a photo and modify it using AI. The prototype, shown as Android XR glasses with a display, connects to Google’s generative AI tools, including Google Gemini and an experimental image generator called Nano Banana.


In the demonstration, Bohn asked the glasses to take a photo of people in the room. Then he gave another command. He asked the system to place those people in front of the famous church in Barcelona that he could not remember by name.

Within moments, the AI produced a new image showing the group standing in front of the Sagrada Família. The people in the photo never traveled to Spain. The background came from AI. To someone viewing the image later, it could look like a real travel photo.

Smart glasses are following the same playbook

The hardware approach behind these devices looks similar across the industry.

Most smart glasses include:

  • A built-in camera
  • Speakers for audio feedback
  • A microphone and a voice assistant
  • Computer vision powered by AI
  • Navigation and contextual information
  • Optional displays inside the lenses

This design mirrors products like the Ray-Ban Meta Smart Glasses, which combine sunglasses with an AI assistant and camera. Those glasses already allow users to capture photos, livestream video and ask questions using voice commands. However, the editing tools currently available inside Meta’s glasses focus more on artistic effects. For example, the system can transform photos into a cartoon or painting style. The goal is creative expression rather than photorealistic manipulation.

Google’s demo hints at something different. It shows how AI can place people into entirely new scenes that never happened.


A close-up of prototype Android XR glasses with a built-in display, part of Google’s concept for AI-powered smart glasses. (X/ @backlon)

Why this matters for photography

AI-generated images already exist across social media. Smartphones have also introduced powerful editing tools. Google’s Pixel phones, for example, have leaned heavily into AI photography with tools that remove objects, adjust lighting and generate backgrounds.

The difference with smart glasses is speed. The technology removes the delay between taking a photo and editing it. Instead of capturing an image and opening editing software later, the AI can change the photo immediately. That could make altered images far more common. Photos that once served as proof of where someone was or what happened may become harder to trust.

The demo still leaves open questions

It is important to note that the Google demo was short and carefully staged. The company acknowledged that parts of the video were edited. That suggests the AI process may take longer in real-world conditions.

There is also the question of reliability. Generative AI tools sometimes produce mistakes, strange artifacts or unrealistic details. Still, even an imperfect system could change how people interact with cameras and images. As the technology improves, the gap between real and AI-generated photos may shrink.


What this means for you

Smart glasses could soon become another everyday device. That means the way we capture and share images may shift again. If these tools become common, you may start seeing photos that were generated or heavily modified by AI. A picture posted online may look like a real moment from someone’s life. In reality, it could be a mix of real people and AI-generated scenery.

That does not mean every image is fake. It does mean digital images may carry less proof than they once did. Understanding how AI editing works can help you approach viral photos, travel shots or dramatic images with a healthy level of skepticism.

Ray-Ban Meta smart glasses combine cameras, speakers and an AI assistant, showing how wearable devices are bringing artificial intelligence into everyday eyewear.  (Meta)

How to spot AI-generated or altered photos

AI editing tools are becoming easier to use. That means altered images may appear more often online. A few habits can help you avoid being misled.

1) Question images that look too perfect

If a photo looks unusually polished or dramatic, pause before assuming it is real. AI images often create scenes that feel cinematic or unusually clean.

2) Look closely at small details

AI systems sometimes struggle with small elements. Check hands, reflections, shadows and background objects for strange shapes or mismatched lighting.


3) Check where the image came from

If a photo spreads quickly online, try to trace the original source. Reverse image search can reveal whether the picture appeared somewhere else first; the short code sketch after this list illustrates the basic idea behind that kind of matching.

4) Be cautious with viral travel or event photos

AI tools can place people into locations they have never visited. A convincing background does not guarantee that the moment actually happened.

5) Watch for photos used in scams or misinformation

AI-generated images can appear in fake travel posts, romance scams or misleading news claims. If a photo appears alongside urgent requests for money or emotional stories, take time to verify it before reacting. Avoid clicking suspicious links and consider using strong antivirus software that can block malicious websites and scam pages before they load. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

6) Treat photos online as information, not proof

Photos once served as strong evidence of where someone was or what occurred. With generative AI, an image may be a mix of real people and computer-generated scenes.
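For readers comfortable with a little code, here is a minimal sketch of the core idea behind reverse image search and duplicate detection: compare compact “perceptual hashes” of two images and treat a small difference as a likely match. This is only an illustration, not how any particular search engine works, and it assumes the third-party Pillow and ImageHash Python packages plus made-up file names.

from PIL import Image          # pip install pillow
import imagehash               # pip install ImageHash

def looks_like_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if two images are visually close enough to be the same shot."""
    hash_a = imagehash.phash(Image.open(path_a))  # perceptual hash of the first image
    hash_b = imagehash.phash(Image.open(path_b))  # perceptual hash of the second image
    # Subtracting two hashes gives a Hamming distance: a small value suggests a
    # near-duplicate (re-cropped or recompressed), a large value suggests a
    # different or heavily altered image.
    return (hash_a - hash_b) <= max_distance

# Example with hypothetical file names:
# print(looks_like_same_photo("viral_post.jpg", "claimed_original.jpg"))

Real reverse image search services index enormous numbers of such fingerprints, but the comparison step works on a similar principle.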

Take my quiz: How safe is your online security?


Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com

Kurt’s key takeaways

Smart glasses promise convenience, hands-free computing and powerful AI tools. At the same time, they blur the line between photography and digital creation. Technology keeps pushing toward a world where capturing a moment and generating one can happen in the same instant. The devices themselves may become smaller and smarter. The challenge may be deciding how much we trust the images they produce.

So here is the question worth asking. If AI glasses can create realistic photos of places you’ve never visited, will pictures still count as proof of reality? Let us know by writing to us at Cyberguy.com



Microsoft is ending the Windows Update nightmare — and letting you pause them indefinitely


While Microsoft isn’t doing away with automatic updates entirely, Windows boss Pavan Davuluri is promising that in the future, you’ll be able to pause them “for as long as you need.” You’ll be able to reboot or shut down your computer “without being forced to install them.” To be fair to Microsoft, I’ve seen an option to reboot or shut down without updating for a while now.
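As a rough illustration of how the current pause feature is recorded, here is a hypothetical Python sketch that reads the pause-expiry value recent Windows 10 and 11 builds appear to store in the registry when you pause updates from Settings. The key path and value name (PauseUpdatesExpiryTime) are assumptions based on today’s builds and could well change under the new policy.

import winreg  # standard library, Windows only

# Assumed location of the Windows Update pause state on recent builds.
SETTINGS_KEY = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"

def updates_paused_until():
    """Return the recorded pause-expiry timestamp string, or None if no pause is set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SETTINGS_KEY) as key:
            value, _value_type = winreg.QueryValueEx(key, "PauseUpdatesExpiryTime")
            return value
    except FileNotFoundError:
        # Key or value is missing, which usually means updates are not paused.
        return None

if __name__ == "__main__":
    expiry = updates_paused_until()
    print(f"Updates paused until {expiry}" if expiry else "No pause window recorded.")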

Even if you fail to pause them, you’ll only have to reboot your computer once a month, Microsoft promises — though it says you’ll be able to get updates faster if you wish. If you’re the kind of user who wants new features so quickly that you’re part of the Windows Insider Program, Microsoft says it’ll make that easier and make it clearer what you’ll get.

And as part of those updates, Microsoft says that this year, it will improve performance, responsiveness and stability, reduce memory consumption, make File Explorer and other apps launch and run faster, reduce crashes, improve drivers, make devices wake up more reliably, and much, much more.

It feels like Microsoft has also taken our feedback about the recent ridiculous hour-plus setup process for some Windows handhelds and laptops to heart. Davuluri writes that we’ll have “the ability to skip updates during device setup to get to the desktop faster.” And even if you sit through the updates, there should be “fewer pages and reboots” so that getting started is simpler. Plus, Microsoft will finally let you use gamepad controls to create your PIN during setup, instead of making you smudge the touchscreen.

Bravo, Microsoft, if this is all true, and if you can implement it in a reasonable length of time.


Davuluri writes that his team has spent months analyzing the feedback of Windows users, and “What came through was the voice of people who care deeply about Windows and want it to be better.”
