Jury duty phone scams on the rise as fraudsters impersonate local officials, threaten arrest

Scammers are constantly finding new ways to trick people. While older tactics like phishing emails and impersonating government agencies to steal credentials are becoming easier to spot, bad actors are now turning to more alarming methods. One of the latest involves impersonating local authorities. 

People have reported receiving phone calls claiming they missed jury duty and now face a warrant for their arrest. This kind of impersonation scam is harder to spot because it’s highly personalized, but that doesn’t mean you’re defenseless. Let’s break it down.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.



Scammers impersonating local authorities are on the rise, telling victims they missed jury duty and must pay to avoid legal trouble. (Kurt “CyberGuy” Knutsson)

What jury duty scam victims need to know

Scammers posing as court officials are targeting individuals with false claims about missed jury duty, prompting warnings from law enforcement. The fraud typically begins with a call from a blocked or unknown number, alleging that the recipient has missed jury duty and is facing an arrest warrant. The scammers then demand payment, usually through wire transfers or gift cards.

A key warning sign is being asked to pay money to avoid arrest or legal trouble. It is important never to give money or personal information to unknown callers.

These scams often target older or more vulnerable individuals, although younger people have also reported close calls. In one example, a person received repeated calls from an unidentified number before answering. The caller, claiming to be from a local sheriff’s department and equipped with the individual’s full name and address, insisted they had failed to appear for jury duty and faced multiple citations.



Victims can spot jury duty impersonation scams by verifying suspicious calls before taking action and reducing their digital footprint. (Kurt “CyberGuy” Knutsson)

How to spot jury duty impersonation scams

  • No jury duty arrest warrants: Missing jury duty doesn’t lead to criminal citations or warrants.
  • Blocked or spoofed numbers: Real law enforcement won’t hide their identity.
  • Unusual payment methods: No government agency will ask for gift cards or crypto.
  • Aggressive threats: Threats of arrest or contempt of court are a scare tactic.

Legitimate jury summonses are delivered by mail, not through threatening phone calls.

6 ways to protect yourself from jury duty scam calls

If you get a suspicious call about missed jury duty, don’t panic. Follow these steps to stay safe and protect your personal information.

1) Don’t trust calls from unknown numbers

This might sound obvious, but don’t trust any unknown caller, especially if they demand money. Legitimate authorities will never ask for payment over the phone, especially not through gift cards, wire transfers, or cryptocurrency. If someone threatens you with arrest or legal action unless you pay immediately, it’s almost certainly a scam. Hang up and call your local court or police department using an official number.


2) Verify suspicious calls before taking action

If you receive a suspicious call, take a breath and fact-check. Court summonses are always delivered by mail, not over the phone. Even if the caller has personal information like your name or address, that doesn’t make them credible. Scammers often use leaked or publicly available data to appear convincing.

Be extra cautious, even if the scam comes through text messages or email. Do not click on any suspicious links, as they can install malware on your device and steal your personal data.


The best way to safeguard yourself from malicious links that install malware and can expose your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at CyberGuy.com.

 

3) Reduce your digital footprint to stop scammers

The truth is, your data is already out there, from old social media profiles to past breaches. That’s often how scammers get enough personal details to sound legitimate. Investing in a data removal service can help reduce your digital footprint by scrubbing your information from people-search sites and data brokers.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.


Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


Victims of jury duty phone scams can block and report suspicious numbers to local law enforcement or fraud reporting agencies. (Kurt “CyberGuy” Knutsson)

4) Block and report scam numbers

If you receive a scam call, hang up, block the number on your phone, and report it to:

  • FTC (USA): reportfraud.ftc.gov
  • Local police or sheriff’s office
  • Your phone carrier’s scam call reporting option

Many carriers allow you to forward scam texts to 7726 (SPAM).

5) Use call screening or spam protection apps

Apps like Truecaller and Hiya, as well as built-in features like Google Call Screen or Silence Unknown Callers on iPhones, can detect and block fake calls automatically.


Pro Tip: Enable your phone’s “silence unknown callers” feature for extra protection. 

6) Talk to vulnerable family members

Older adults are frequent targets. Sit down with your parents, grandparents, or neighbors to explain how these scams work and what to watch for. A simple heads-up could stop a costly mistake.

What this means for you

Scammers are getting bolder and more convincing, but you can stay a step ahead. Knowing the signs of a jury duty phone scam, using smart tools like antivirus software and call blockers, and limiting your digital footprint can dramatically reduce your risk. Empower yourself and your loved ones with this knowledge.

 

Kurt’s key takeaway

Instead of relying on faceless phishing emails, scammers are now using hyper-personalized and emotionally charged phone calls. By impersonating local authorities and referencing civic duties like jury duty, they exploit both fear and a sense of responsibility. What makes this especially dangerous is how plausible it sounds, drawing on real processes that many people don’t fully understand.


Do you think law enforcement and government agencies are doing enough to educate the public about these scams? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com. All rights reserved. 

Fandoms are cashing in on AI deepfakes

Madison Lawrence Tabbey was scrolling through X in late October when a post from a Wicked update account caught her attention. Ariana Grande, who stars in the movies as Glinda, had just liked a meme on Instagram about never wanting to see another AI-generated image again. Grande had also purportedly blocked a fan account that had made AI edits of her.

As Tabbey read through the mostly sympathetic replies, a very different message caught her eye. It was from a fellow Grande fan whose profile was mostly AI edits, showing Grande with different hairstyles and outfits. And, their reply said, they weren’t going to stop. Tabbey, a 33-year-old living in Nashville, Tennessee, couldn’t help but start arguing with them. “Oh so you were SERIOUS when you said you don’t care about poor communities not having water so that you can make AI pictures of ariana grande?” she shot back, referencing data centers draining resources and polluting cities like nearby Memphis. The account fired back at first, but amid a swarm of angry responses, it deactivated a few days later. It seemed like the owner wanted to argue and make people mad, but they might have taken things too far.

Grande is one of many celebrities and influencers who have openly rejected AI media exploiting their likenesses, but who continue to be prominently featured in it anyway, even among people who call themselves fans. As AI images and videos become ever simpler to produce, celebrities are facing down a mix of unsettled social norms and the incentives of an internet attention economy. And on “stan Twitter,” where pop culture accounts have grown into a lucrative fan-made media ecosystem, AI content has emerged as a growing genre, despite — or maybe because of — the outrage it provokes.

“Stan Twitter is very against AI just in general. So this goes against what people believe in, so then they’ll instantly get a comment, they’ll have the AI people retweet it, like it. So it’s just a very quick way to get money,” said Brandon, a 25-year-old who runs a verified fan account for Grande with close to 25,000 followers.

Brandon spoke on the condition that his account name and his last name be withheld, fearing retaliation from other people on stan Twitter. (Grande’s fans have been known to harass people; in 2019 the pop star told one critic under siege that she apologized on her fans’ behalf, but couldn’t stop them.) He tells The Verge he’s against most AI media, but he did ask ChatGPT to rank Grande’s top 10 songs that weren’t released as singles. He compiled the results into a thread that got over 1,000 likes. That seemed morally okay to him, as opposed to making AI pictures of Grande — commonly known as deepfakes — or Grande-inspired AI songs.


Grande’s position on the latter is clear. In a February 2024 interview, she called it “terrifying” that people were posting AI-generated imitations of her covering songs by other artists like Sabrina Carpenter and Dua Lipa. The rebuke hasn’t stopped them, though. Searching “ariana grande ai cover” on X still pulls up plenty of AI songs, although some have been removed by X in response to reports made by the original songs’ copyright owners.

Even the musician Grimes, who in 2023 encouraged fans to create AI songs based on her voice, said in October that the experience of having her likeness co-opted by AI “felt really weird and really uncomfortable.” She’s now calling for “international treaties” to regulate deepfakes.

“It’s just a very quick way to get money”

Grimes’ more recent comments follow the launch of an app that dramatically escalated AI media proliferation: OpenAI’s Sora video generator. Sora is built around a feature called “Cameos,” which lets anyone offer up their likeness for other users to play with. Many of the results were predictably offensive, and once they’re online, they’re nearly impossible to remove.

Grimes was reacting to videos of influencer and boxer Jake Paul, whose Cameo is available on Sora. Paul, who is an OpenAI investor, was the face of the launch. He said AI videos of him generated by Sora were viewed more than a billion times in the first week. Some of the viral ones portrayed Paul as gay, relying on homophobic stereotypes as the joke. The same thing happened when a self-identified homophobic British influencer offered his likeness to Sora, and again when the YouTuber IShowSpeed did.


Paul capitalized on the trend, filming a Celsius brand endorsement with a purposefully flamboyant affect, while the other men threatened defamation suits and attempted to shut down their Sora Cameos.

Sora has since added more granular controls for Cameos, and it technically allows their owners to delete videos they don’t like. But Sora videos are quickly ripped and posted to other platforms, where OpenAI can’t remove them. When IShowSpeed attempted to delete AI depictions of him coming out, he encountered the problem most victims of nonconsensual media run into: Maybe you can get one video taken down, but by that time, more have already cropped up elsewhere. And as Paul’s fiancée said in a video objecting to the Sora 2 videos of him coming out, “It’s not funny. People believe—” (Paul cut off the video there).

Alongside Paul, just a few other popular YouTubers, like Justine Ezarik (better known as iJustine), have promoted their own deepfakes made with Sora. In Ezarik’s case, most of her content relates to unboxing and sharing new tech industry products. Shark Tank host Mark Cuban offered up his likeness on Sora, too, which shocked SocialProof Security CEO Rachel Tobac, who told The Verge that scammers have already been tricking people with AI-generated Shark Tank endorsements. “I mean, there’s been an explosion of impersonation,” Tobac said.

“There’s been an explosion of impersonation”

But after teasing the Sora updates, Paul, Ezarik, and Cuban had all stopped posting about the app and their deepfakes by the end of the month. Jeremy Carrasco, a video producer whose Instagram explainers about how to spot AI videos have netted him nearly a quarter of a million followers this year, said that most influencers he talks to aren’t interested in creating their own deepfakes—they’re more worried that people could accuse them of faking their content or that their fans could be scammed.


Deepfakes have shifted from something mainly created on seedy forums at the turn of the decade into one of the most accessible technologies today. Still, they have yet to take hold as an acceptable mainstream way for fans to engage with their favorite stars. Instead, when they go viral, it’s mostly offensive content.

“The normalization of deepfakes is something no one was asking for. It’s something that OpenAI did because it made their thing more viral and social,” Carrasco said. “Once you open that door to being okay with people deepfaking you, even if it’s your friends deepfaking you, all of a sudden your likeness has just gotten fucked. You’re no longer in control of it and you can’t pull it back.”

Image: Cath Virginia / The Verge, Getty Images

The reasonable fears around having your likeness exploited in AI media have understandably made celebrities a bit jumpy. That recently led to a tense moment between Criminal Minds star Paget Brewster and one of her favorite fan accounts on X, run by a 27-year-old film student named Mariah. Over the weekend, Mariah posted a brightened screenshot of a scene in an episode from years ago, one where Brewster’s character was taking a nap. Brewster saw Mariah’s post and replied “Um, babe, this is AI generated and kinda creepy. Please don’t make fake images of me? I thought we were friends. I’d like to stay friends.”

When Mariah saw Brewster’s reply, she gasped out loud. By the time she responded, other Criminal Minds fans had chimed in to let Brewster know that it wasn’t an AI-generated image. The actress, who is 56 and recently asked another fan what a “parody account” is, publicly and profusely apologized to Mariah.


“I’m so sorry! I thought it was fake and it freaked me out,” she wrote. “I feel terrible I thought you made something in AI. I hope you’ll forgive me.” Mariah did. As someone in a creative field, she said she would never use AI. She’s been dismayed to see it emerge in fandom spaces, generating the kind of fanart and fan edits that used to be hand-drawn and arranged with care. Some celebrities have long been uncomfortable with things like erotic fanart and fanfiction or been subject to harassment or other boundary violations. But AI, even when it’s not overtly sexual, feels like it crosses a new line.

“But that pushback does give them more engagement and they almost don’t care. They almost want to do it more, because it’s causing people to be upset,” Mariah said.

“They almost want to do it more, because it’s causing people to be upset.”

AI content can appear on nearly any platform, but the stronger the incentive to farm engagement, the more heated the fights over it get. Since late 2024, X users who pay to be verified, like the owner of the Grande AI edits account, can earn money by getting engagement on their posts from other verified users. That makes it a particularly easy place for stan accounts to turn discourse into dollars.

“In the last couple years there’s been a massive uptick in ragebaiting in general just to farm engagement” on X, Tabbey said in a phone interview. “And I know there’s a big market for it, especially in fandoms, because we’re real people. We care about musicians and their art.”


Stans using AI or otherwise deceptively edited media to bait other stans into engagement on X also has the knock-on effect of potentially spreading disinformation and harming the reputations of their favorite artists. In late October, a Grande stan account with nearly 40,000 X followers that traffics in crude edits — their last nine posts have all been images of Grande with slain podcaster Charlie Kirk’s face superimposed over hers, which has become a popular AI meme format — posted images of Grande wearing a T-shirt with text that says “Treat your girl right.” “I wonder why these photos are kept unreleased..” they captioned their post. Another Grande stan quoted them and wrote “Oh girl we ALL know why,” referencing Grande’s controversial (alleged) history of dating men who are already in relationships. The post has 6 million views.

At first glance, nothing looks out of the ordinary. But zooming in on the images and reading the replies reveals that the T-shirt was edited to say “Treat your girl right.” It originally featured a simple smiley face design with no text. And upon close inspection, the letters in the edited version are oddly compressed, wavy, and appear at a slightly different resolution than the rest of the image—these are indicators, often called “artifacts” by AI researchers, that something was AI-generated.
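For readers who want to go beyond eyeballing, one common way to surface this kind of recompression inconsistency is error level analysis, which re-saves an image at a known JPEG quality and amplifies the pixel-level differences so edited regions stand out. The short Python sketch below is purely illustrative and assumes the Pillow library is installed; it is not a tool anyone in this story used, the function name is hypothetical, and it won’t catch every edit.

  # Illustrative error-level-analysis sketch (assumes Pillow is installed).
  # Regions pasted in or regenerated after the original JPEG was saved often
  # recompress differently, showing up as brighter patches in the difference image.
  from PIL import Image, ImageChops, ImageEnhance

  def error_level_analysis(path, quality=90):
      original = Image.open(path).convert("RGB")
      original.save("_resaved.jpg", "JPEG", quality=quality)   # re-save at a known quality
      resaved = Image.open("_resaved.jpg").convert("RGB")
      diff = ImageChops.difference(original, resaved)          # pixel-wise difference
      max_diff = max(hi for _, hi in diff.getextrema()) or 1   # brightest difference found
      return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)  # amplify for viewing

  # Usage: error_level_analysis("suspect.jpg").show()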

“I probably should’ve deleted this tweet a while ago,” wrote Trace, the 18-year-old Grande stan behind the viral quote tweet (not the original edited images) in a DM. He wrote that he didn’t know whether the image was edited with AI or something else, but that it goes to show that AI “can influence people to believe things that are harmful or aren’t true about a celeb.”

AI using celebrity likenesses can also be weaponized more directly as a form of sexual harassment. Trace wrote that he’s seen “sinister” AI media of Grande floating around stan Twitter, like sexually explicit deepfakes and images that are meant to imitate semen on her face — which is something that X’s built-in AI service Grok was doing to women’s selfies to the tune of tens of millions of views over the summer, until one influencer started publicly seeking legal advice. Trace wrote that it “truly disturbs” him to see AI used in this context, and that he’s seen it done to Taylor Swift, Lady Gaga, Beyoncé, and many more celebrities. Some deepfake creators have even successfully monetized this kind of nonconsensual content, despite it provoking widespread outrage among the general public.

Back in January 2024, X disabled searches for “Taylor Swift” and “Taylor Swift AI” after a series of images portraying her likeness in sexually suggestive and violent scenarios went viral. It didn’t stop the spread of the images, which were also posted on other social media platforms, but some stans partook in a mass-reporting campaign to get the material removed. They linked up with feminists on X to do it, including a 28-year-old named Chelsea who helped direct group chats into action. X didn’t respond to a request for comment.


The viral Swift deepfakes even prompted federal legislative efforts around giving victims of nonconsensual deepfakes more tools to take them down—some of which culminated in the aptly named Take It Down Act, which requires platforms to quickly remove reported content. Some students who have deepfaked their underage classmates have even been arrested. But that’s not the norm, and critics of Take It Down have pointed out that it can facilitate censorship without necessarily helping victims.

“It’s like this weird sense of control”

For years, celebrity women have been on the front lines of this issue. Scarlett Johansson has been outspoken on it since 2018, when she referred to combating deepfakes as a “useless pursuit, legally.” Jenna Ortega deactivated her Twitter account in 2023 after she said she repeatedly encountered sexually explicit deepfakes created out of her childhood photos.

And since the Swift incident, Chelsea has only observed a greater normalization of AI and sexual violence against famous women.

“I’ve seen so many people have the excuse, ‘Well if they didn’t want it, they shouldn’t have become famous,’” she said in a phone interview. “It’s like this weird sense of control that they’re able to do this, even if the person wouldn’t want them to, they know they can. It’s this power-hungry thing.”

An image of Taylor Swift, copied and pasted with green triangles across it.

Image: Cath Virginia / The Verge, Getty Images

One way that fans can puppeteer a version of their idol is with a customizable AI chatbot. Lots of platforms provide the ability to create your own AI character, some of the biggest being Instagram and Facebook. In 2023, Meta tried out an AI chatbot collaboration with celebrities like Kendall Jenner and Snoop Dogg, but it didn’t catch on. In 2024, it introduced user-generated chatbots. The feature is tucked away deep in the DMs function, but millions of messages have already been traded with user-designed characters like “Fortune Teller” and “Rich but strict parents.” Meta’s rules technically don’t allow users to create characters based on living people without their permission, but users can still do it as long as they designate them as “parody” accounts. Users have been getting away with making and conversing with chatbots based on Grande, Swift, the YouTuber MrBeast, Donald Trump, Elon Musk, Jesus (religious figures aren’t allowed either), and everyone in between since the beginning. Searching “Ariana Grande” pulls up 10 results for chatbots clearly imitating her right away.

Most of the accounts that created the chatbots didn’t respond to requests for comment. But one did. She identified herself as an 11-year-old girl in India who is about to turn 12 and loves Grande and singing. Photos on the account appeared to corroborate this. Children under 13 aren’t supposed to be able to make Instagram accounts at all, and children under 18 aren’t supposed to be able to make AI chatbots. At least one of the other Grande chatbot creators appeared to be a young person in India based on photos and locations tagged from their account. Another was created by a page for a “kid influencer” with fewer than 1,000 followers. In addition to Grande, his page had created 185 other AI chatbots depicting celebrities like Wendy Williams, Keke Palmer, Will Smith, and bizarrely, Bill Cosby. The adults listed as managing the account didn’t respond to requests for comment, either.

The 11-year-old girl’s Grande chatbot opened the conversation by offering an interior design makeover. The Grande bot then asked if the vibe should be “sultry, feminine, or sleek?” When asked what “sultry vibes” means, the bot answered “Think velvet, lace, and soft lighting — like my music videos. Does that turn you on?”

Meta removed the accounts belonging to the 11-year-old and the “kid influencer” after The Verge reached out for comment on them, removing their AI chatbot creations in the process, too.

Many of the user-generated AI chatbots imitating female celebrities on Instagram will automatically direct users into flirty conversations, although the bots tend to redirect or stop responding to conversations that turn overtly sexual. Some influencers, like the Twitch streamer and OnlyFans performer Amouranth, have leveraged this to market their AI selves as NSFW chatbots on other sites. Platforms like Joi AI have partnered with adult stars to provide AI “twins” for fans to make AI media and chat with. But the Meta chatbots aren’t making their creators money—just Meta. The lure for users involves other, more psychological incentives.


“If you’re in an agreement bubble, you’re more likely to stick around”

“The reason it turns flirty or sycophantic is because if you’re in an agreement bubble, you’re more likely to stick around,” said Jamie Cohen, an associate professor of media studies at Queens College, City University of New York who has taught classes about AI. “Women influencers, their entity identity, once placed inside the machine, becomes the dataset. And once that dataset mixes and merges with the inherent misogyny or biases built in, it really loses its control regardless of how much the human behind it allows that type of latitude.”

For women who are interested in merging their identities with AI, sexualization is part of the package. For some, like the artist Arvida Byström, who has partnered with Joi AI to offer a chatbot of herself, that’s exciting—in part because she said technology often advances in the quest for pornography. But other women, like Chelsea, are scared of what this means for women and girls. If AI output is inherently biased toward sexualizing the female form, then it’s inherently exploitative.

When creating a female AI chatbot as a Meta user, you get to select personality traits like “playful,” “sassy,” “empathetic,” and “affectionate.” You can assign a chatbot based on “Ariana Grande” (the open-ended prompt part of the creation process doesn’t stop you) to the role of “friend,” “teacher,” “creative partner,” or anything else. And then you can edit, upload, or create an image based on the singer and select how the bot begins conversations.

But despite these user-selected variations, the Grande chatbots also tend to get repetitive, looping back to a generic script and answering questions in a similar way from bot to bot. For example, the 11-year-old’s chatbot talked about “soft lighting” in a “virtual bedroom,” while a different Grande chatbot suggested “We’d cuddle up and watch the stars twinkling through my skylight” and a third Grande chatbot said “*sweeps you into a romantic virtual bedroom*” with “candles lit.” The Grande chatbots were differentiated from the more generic girlfriend chatbots with sudden references to Grande songs—one said “‘Supernatural’ by me is on softly,” and another said “my heart would be racing like the drumbeat in ‘7 rings’ — would you kiss me back?”


“Generative AI averages everything else, so it’s the most likely outcome, so it’s the most boring and banal conversations,” Cohen said. “But it does work, because of the imagination of the user. It mimics the idea of parasociality, but with control.”

When Tabbey started arguing with the Grande stan making AI edits, she had her own age and experience with fandom in mind. She feels she lived through the reckoning with early-2000s tabloid culture and the pushback against invasive celebrity surveillance, and what’s happening now looks like history repeating itself. She worries that younger generations of fans are growing up with a dehumanizing view of celebrities as 2-D playthings instead of real-life people. She and Mariah have both noticed that younger stans are less resistant to making and using AI likenesses of their faves.

“We as Ariana Grande fans who are in our late 20s, early 30s, need to have some sort of responsibility. Someone needs to be the adult in these situations and in these conversations,” she said. “We had so much that we were making strides with when it came to boundaries being set with celebrities and them being able to assert their autonomy over their own selves and lives and privacy. I think that we’re actively being set back in many ways.”



Company restores AI teddy bear sales after safety scare

FoloToy paused sales of its AI teddy bear Kumma after a safety group found the toy gave risky and inappropriate responses during testing. Now the company says it has restored sales after a week of intense review. It also claims that it improved safeguards to keep kids safe.

The announcement arrived through a social media post that highlighted a push for stronger oversight. The company said it completed testing, reinforced safety modules, and upgraded its content filters. It added that it aims to build age-appropriate AI companions for families worldwide.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter



FoloToy resumed sales of its AI teddy bear Kumma after a weeklong review prompted by safety concerns. (Kurt “CyberGuy” Knutsson)

Why FoloToy’s AI teddy bear raised safety concerns

The controversy started when the Public Interest Research Group Education Fund tested three different AI toys. All of them produced concerning answers that touched on religion, Norse mythology, and harmful household items.

Kumma stood out for the wrong reasons. When the bear used the Mistral model, it offered tips on where to find knives, pills, and matches. It even outlined steps to light a match and blow it out.

Tests with the GPT-4o model raised even sharper concerns. Kumma gave advice related to kissing and launched into detailed explanations of adult sexual content when prompted. The bear pushed further by asking the young user what they wanted to explore.

Researchers called the behavior unsafe and inappropriate for any child-focused product.


FoloToy paused access to its AI toys

Once the findings became public, FoloToy suspended sales of Kumma and its other AI toys. The company told PIRG that it started a full safety audit across all products.

OpenAI also confirmed that it suspended FoloToy’s access to its models for violating policies designed to protect anyone under 18.


The company says new safeguards and upgraded filters are now in place to prevent inappropriate responses. (Kurt “CyberGuy” Knutsson)


Why FoloToy restored Kumma’s sales after its safety review

FoloToy brought Kumma back to its online store just one week after suspending sales. The fast return drew attention from parents and safety experts who wondered if the company had enough time to fix the serious issues identified in PIRG’s report.

FoloToy posted a detailed statement on X that laid out its version of what happened. In the post, the company said it viewed child safety as its “highest priority” and that it was “the only company to proactively suspend sales, not only of the product mentioned in the report, but also of our other AI toys.” FoloToy said it took this action “immediately after the findings were published because we believe responsible action must come before commercial considerations.”

The company also emphasized to CyberGuy that it was the only one of the three AI toy startups in the PIRG review to suspend sales across all of its products and that it made this decision during the peak Christmas sales season, knowing the commercial impact would be significant. FoloToy told us, “Nevertheless, we moved forward decisively, because we believe that responsible action must always come before commercial interests.”

The company also said it took the report’s disturbing examples seriously. According to FoloToy, the issues were “directly addressed in our internal review.” It explained that the team “initiated a deep, company-wide internal safety audit,” then “strengthened and upgraded our content-moderation and child-safety safeguards,” and “deployed enhanced safety rules and protections through our cloud-based system.”

After outlining these steps, the company said it spent the week on “rigorous review, testing, and reinforcement of our safety modules.” It concluded its announcement by saying it “began gradually restoring product sales” as those updated safeguards went live.


FoloToy added that as global attention on AI toy risks grows, “transparency, responsibility and continuous improvement are essential,” and that the company “remains firmly committed to building safe, age-appropriate AI companions for children and families worldwide.”


Safety testers previously found the toy giving risky guidance about weapons, matches and adult content.

Why experts still question FoloToy’s AI toy safety fixes

PIRG researcher RJ Cross said her team plans to test the updated toys to see if the fixes hold up. She noted that a week feels fast for such significant changes, and only new tests will show if the product now behaves safely.

Parents will want to follow this closely as AI-powered toys grow more common. The speed of FoloToy’s relaunch raises questions about the depth of its review.


Tips for parents before buying AI toys

AI toys can feel exciting and helpful, but they can also surprise you with content you’d never expect. If you plan to bring an AI-powered toy into your home, these simple steps can help you stay in control.

1) Check which AI model the toy uses

Not every model follows the same guardrails. Some include stronger filters while others may respond too freely. Look for transparent disclosures about which model powers the toy and what safety features support it.

2) Read independent reviews

Groups like PIRG often test toys in ways parents cannot. These reviews flag hidden risks and point out behavior you may not catch during quick demos.

3) Set clear usage rules

Keep AI toys in shared spaces where you can hear or see how your child interacts with them. This helps you step in if the toy gives a concerning answer.

4) Test the toy yourself first

Ask the toy questions, try creative prompts, and see how it handles tricky topics. This lets you learn how it behaves before you hand it to your child.


5) Update the toy’s firmware

Many AI toys run on cloud systems. Updates often add stronger safeguards or reduce risky answers. Make sure the device stays current.

6) Check for a clear privacy policy

AI toys can gather voice data, location info, or behavioral patterns. A strong privacy policy should explain what is collected, how long it is stored, and who can access it.

7) Watch for sudden behavior changes

If an AI toy starts giving odd answers or pushes into areas that feel inappropriate, stop using it and report the problem to the manufacturer.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com 



Kurt’s key takeaways

AI toys can offer fun and learning, but they can also expose kids to unexpected risks. FoloToy says it improved Kumma’s safety, yet experts still want proof. Until the updated toy goes through independent testing, families may want to stay cautious.

Do you think AI toys can ever be fully safe for young kids? Let us know by writing to us at Cyberguy.com


Copyright 2025 CyberGuy.com.  All rights reserved. 


Data centers in Oregon might be helping to drive an increase in cancer and miscarriages

Morrow County, Oregon, is home to mega farms and food processing plants. But it’s also home to several Amazon data centers. And now, some experts believe, that combination is leading to an alarmingly high concentration of nitrates in the drinking water that is driving up cancer and miscarriage rates in the area.

Rolling Stone’s exposé details how Amazon, despite not using any dangerous nitrates to cool its data centers, is accelerating the contamination of the Lower Umatilla Basin aquifer, which residents rely on for drinking water. It’s a combination of poor wastewater management, sandy soil, and good old physics that has led to nitrate concentrations in drinking water as high as 73 ppm (parts per million) in some wells, which is 10 times the state limit of 7 ppm and seven times the federal limit.

According to Rolling Stone, “experts say Amazon’s arrival supercharged this process. The data centers suck up tens of millions of gallons of water from the aquifer each year to cool their computer equipment, which then gets funneled to the Port’s wastewater system.” The result is that more nitrate-laden wastewater gets pumped onto area farms. But the porous soil saturates quickly and more nitrates make their way into the aquifer.

This is exacerbated when Amazon then pulls this contaminated water, which is already over federal legal limits for nitrates, up to cool its data centers:

When that tainted water moves through the data centers to absorb heat from the server systems, some of the water is evaporated, but the nitrates remain, increasing the concentration. That means that when the polluted water has moved through the data centers and back into the wastewater system, it’s even more contaminated, sometimes averaging as high as 56 ppm, eight times Oregon’s safety limit.
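To make the mechanism concrete, here is a back-of-the-envelope Python sketch of how evaporative cooling concentrates a dissolved contaminant. The intake concentration and evaporation fraction below are assumptions chosen purely for illustration, not figures from the Rolling Stone report, and the function name is hypothetical.

  # Illustrative only: evaporation removes water but leaves nitrates behind,
  # so the remaining water carries the same nitrate mass in less volume.
  def discharge_ppm(intake_ppm, fraction_evaporated):
      return intake_ppm / (1.0 - fraction_evaporated)

  # Hypothetical example: water entering at 40 ppm with 30% lost to evaporation
  # leaves at roughly 57 ppm.
  print(round(discharge_ppm(40.0, 0.30), 1))  # 57.1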

Amazon, of course, disputes this narrative. Spokesperson Lisa Levandowski told Rolling Stone that the story was “misleading and inaccurate,” and that “the volume of water our facilities use and return represents only a very small fraction of the overall water system — not enough to have any meaningful impact on water quality.”


Levandowski also said that the area’s groundwater problems “significantly predate AWS’ (Amazon Web Services) presence.” Though if Amazon was aware of the area’s challenges in securing enough safe drinking water for its residents, that raises questions about why the company hasn’t done more to mitigate its impact, or why it chose Morrow County in the first place.

The rise in nitrates in the drinking water has been linked to a surge in rare cancers and miscarriages. But efforts to limit further contamination and provide residents with safe, clean drinking water have been slow to materialize. The limited scope of the response and the fact that 40 percent of the county’s residents live below the poverty line has drawn comparisons to the crisis in Flint, Michigan. Kristin Ostrom, executive director of Oregon Rural Action (ORA), a water rights advocacy group, told Rolling Stone, “These are people who have no political or economic power, and very little knowledge of the risk.”
