Technology
TiVo won the court battles, but lost the TV war
In the 2000s, TiVo reached heights few companies ever achieve. Like Google and Xerox, its name became a verb. People had to “TiVo” the new episode of Battlestar Galactica or game 4 of the Red Sox vs. Cardinals, not “record” it. While it didn’t invent the DVR, TiVo popularized it and many of the features we would eventually take for granted, like the ability to pause or rewind live TV, and watch one program while recording another.
Those features were covered in the now-infamous US Patent 6,233,389 — better known as the Time Warp patent. TiVo spent a good chunk of the 2000s and early 2010s defending its intellectual property through a series of high-profile lawsuits, most notably against EchoStar. That saga lasted the better part of a decade, with TiVo filing suit in January 2004 and the final $500 million settlement being reached in April 2011.
TiVo spent many of its prime years locked in court battles with major players in the television and digital video space. Motorola, Time Warner Cable, AT&T, Dish Network, Cisco, and Verizon all found themselves on the receiving end of a patent infringement lawsuit from TiVo, which came out victorious in almost every single one. The US Patent and Trademark Office even agreed to reexamine the Time Warp patent on two separate occasions and reaffirmed its claims both times.
Licensing its technology became the primary way TiVo made money as it entered the 2010s. The problem was, by then, the writing was on the wall. Netflix launched its streaming service in January 2007. Hulu entered beta later that year and launched publicly in March of 2008. That year also marked the launch of Roku’s first device and the earliest models of modern smart TVs, like the Samsung PAVV Bordeaux TV 750.
DVRs became standard issue with most cable TV packages. Sure, TiVo’s interface was slicker, and it had advanced features, such as remotely scheduling recordings via TiVo Central Online or transferring them to a computer with TiVoToGo. But spending $200 or more on a separate DVR in 2008 (at least if you wanted HD tuners), plus an additional subscription cost on top of your cable bill, was an increasingly hard sell when Time Warner would give you a DVR that was good enough.
Roku was offering simple-to-use streaming set-top boxes at impulse purchase prices — as low as $49.99 by 2011. Google pushed prices even lower with the Chromecast in 2013. Smart TV operating systems were becoming increasingly capable. TiVo was adding support for Netflix, Hulu, and other streaming services, but it seemed to constantly be playing catch-up as it entered the new decade.
TiVo’s hardware had stagnated. The company was wasting time on features like the ability to order Domino’s from your TV. And its biggest moneymaker — a patent focused on manipulating broadcast television — was becoming obsolete as cord-cutting grew in popularity.
According to nScreenMedia, traditional pay TV subscriptions peaked in the US in 2010 at around 103 million, or roughly 89 percent of households. In 2025, that number is down to just 49.6 million, or 37.6 percent of households. The most popular streaming services are now easily outpacing linear pay TV as they copy some of its moves by leaning into live content anchored by sports and other spectacles that draw eyeballs to now-unskippable ads. At the end of 2024, Netflix had 89.6 million subscribers and Disney Plus 56.8 million in the US and Canada. (The companies report subscriptions by region only, not country.) As TiVo continued to battle companies like Google and Time Warner in court, its customer base was drying up.
TiVo was eventually purchased by Rovi, a company whose primary business was amassing patents and licensing them out, suing when necessary to force other companies into a deal. This, sadly, was to be TiVo’s fate going forward. When it was purchased by tech licensing firm Xperi in 2020, the press release announcing the merger didn’t tout best-in-class hardware or innovative set-top box software. Instead, it bragged about having “one of the industry’s largest and most diverse intellectual property (IP) licensing platforms.”
After its merger with Xperi, TiVo wouldn’t launch another set-top box. Its last model, the TiVo Edge, was released in 2019. And this month, the company confirmed it had quietly sold the last of its stock on September 30th and would be exiting the hardware business.
TiVo says it plans to focus on its fledgling smart TV OS — a move that’s probably 15 years too late. Perhaps if the company had been focused on revenue sources outside the courtroom, it could have been at the forefront of the smart TV rollout. Maybe it could have developed its own streaming-first device that was more than a lazy (and late) reskin of Android TV. TiVo’s UI and iconic peanut remote were beloved. Its brand was a household name. But, rather than build a platform to power the next generation of televisions, it seemed focused on milking every dollar out of companies clearly heading towards obsolescence.
Technology
Fandoms are cashing in on AI deepfakes
Madison Lawrence Tabbey was scrolling through X in late October when a post from a Wicked update account caught her attention. Ariana Grande, who stars in the movies as Glinda, had just liked a meme on Instagram about never wanting to see another AI-generated image again. Grande had also purportedly blocked a fan account that had made AI edits of her.
As Tabbey read through the mostly sympathetic replies, a very different message caught her eye. It was from a fellow Grande fan whose profile was mostly AI edits, showing Grande with different hairstyles and outfits. And, their reply said, they weren’t going to stop. Tabbey, a 33-year-old living in Nashville, Tennessee, couldn’t help but start arguing with them. “Oh so you were SERIOUS when you said you don’t care about poor communities not having water so that you can make AI pictures of ariana grande?” she shot back, referencing data centers draining resources and polluting cities like nearby Memphis. The account fired back at first, but amid a swarm of angry responses, it deactivated a few days later. It seemed like the owner wanted to argue and make people mad, but they might have taken things too far.
Grande is one of many celebrities and influencers who have openly rejected AI media exploiting their likenesses, but who continue to be prominently featured in it anyway, even among people who call themselves fans. As AI images and videos become ever simpler to produce, celebrities are facing down a mix of unsettled social norms and the incentives of an internet attention economy. And on “stan Twitter,” where pop culture accounts have grown into a lucrative fan-made media ecosystem, AI content has emerged as a growing genre, despite — or maybe because of — the outrage it provokes.
“Stan Twitter is very against AI just in general. So this goes against what people believe in, so then they’ll instantly get a comment, they’ll have the AI people retweet it, like it. So it’s just a very quick way to get money,” said Brandon, a 25-year-old who runs a verified fan account for Grande with close to 25,000 followers.
Brandon spoke on the condition that his account name and his last name be withheld, fearing retaliation from other people on stan Twitter. (Grande’s fans have been known to harass people; in 2019 the pop star told one critic under siege that she apologized on her fans’ behalf, but couldn’t stop them.) He tells The Verge he’s against most AI media, but he did ask ChatGPT to rank Grande’s top 10 songs that weren’t released as singles. He compiled the results into a thread that got over 1,000 likes. That seemed morally okay to him, as opposed to making AI pictures of Grande — commonly known as deepfakes — or Grande-inspired AI songs.
Grande’s position on the latter is clear. In a February 2024 interview, she called it “terrifying” that people were posting AI-generated imitations of her covering songs by other artists like Sabrina Carpenter and Dua Lipa. The rebuke hasn’t stopped them, though. Searching “ariana grande ai cover” on X still pulls up plenty of AI songs, although some have been removed by X in response to reports made by the original songs’ copyright owners.
Even the musician Grimes, who in 2023 encouraged fans to create AI songs based on her voice, said in October that the experience of having her likeness co-opted by AI “felt really weird and really uncomfortable.” She’s now calling for “international treaties” to regulate deepfakes.
Grimes’ more recent comments follow the launch of an app that dramatically escalated the proliferation of AI media: OpenAI’s Sora video generator. Sora is built around a feature called “Cameos,” which lets anyone offer up their likeness for other users to play with. Many of the results have been predictably offensive, and once they’re online, they’re nearly impossible to remove.
Grimes was reacting to videos of influencer and boxer Jake Paul, whose Cameo is available on Sora. Paul, who is an OpenAI investor, was the face of the launch. He said AI videos of him generated by Sora were viewed more than a billion times in the first week. Some of the viral ones portrayed Paul as gay, relying on homophobic stereotypes as the joke. The same thing happened when a self-identified homophobic British influencer offered his likeness to Sora, and again when the YouTuber IShowSpeed did.
Paul capitalized on the trend, filming a Celsius brand endorsement with a purposefully flamboyant affect, while the other men threatened defamation suits and attempted to shut down their Sora Cameos.
Sora has since added more granular controls for Cameos, and it technically allows their owners to delete videos they don’t like. But Sora videos are quickly ripped and posted to other platforms, where OpenAI can’t remove them. When IShowSpeed attempted to delete AI depictions of him coming out, he encountered the problem most victims of nonconsensual media run into: Maybe you can get one video taken down, but by that time, more have already cropped up elsewhere. And as Paul’s fiancée said in a video objecting to the Sora 2 videos of him coming out, “It’s not funny. People believe—” (Paul cut off the video there).
Alongside Paul, just a few other popular YouTubers, like Justine Ezarik (better known as iJustine), have promoted their own deepfakes made with Sora. In Ezarik’s case, most of her content revolves around unboxing and showing off new tech products. Shark Tank host Mark Cuban offered up his likeness on Sora, too, which shocked SocialProof Security CEO Rachel Tobac, who told The Verge that scammers have already been tricking people with AI-generated Shark Tank endorsements. “I mean, there’s been an explosion of impersonation,” Tobac said.
But after teasing the Sora updates, Paul, Ezarik, and Cuban had all stopped posting about the app and their deepfakes by the end of the month. Jeremy Carrasco, a video producer whose Instagram explainers about how to spot AI videos have netted him nearly a quarter of a million followers this year, said that most influencers he talks to aren’t interested in creating their own deepfakes—they’re more worried that people could accuse them of faking their content or that their fans could be scammed.
Deepfakes have shifted from something mainly created on seedy forums at the turn of the decade into one of the most accessible technologies today. Still, they have yet to take hold as an acceptable mainstream way for fans to engage with their favorite stars. Instead, when they go viral, it’s mostly offensive content.
“The normalization of deepfakes is something no one was asking for. It’s something that OpenAI did because it made their thing more viral and social,” Carrasco said. “Once you open that door to being okay with people deepfaking you, even if it’s your friends deepfaking you, all of a sudden your likeness has just gotten fucked. You’re no longer in control of it and you can’t pull it back.”
The reasonable fears around having your likeness exploited in AI media have understandably made celebrities a bit jumpy. That recently led to a tense moment between Criminal Minds star Paget Brewster and one of her favorite fan accounts on X, run by a 27-year-old film student named Mariah. Over the weekend, Mariah posted a brightened screenshot of a scene in an episode from years ago, one where Brewster’s character was taking a nap. Brewster saw Mariah’s post and replied “Um, babe, this is AI generated and kinda creepy. Please don’t make fake images of me? I thought we were friends. I’d like to stay friends.”
When Mariah saw Brewster’s reply, she gasped out loud. By the time she responded, other Criminal Minds fans had chimed in to let Brewster know that it wasn’t an AI-generated image. The actress, who is 56 and recently asked another fan what a “parody account” is, publicly and profusely apologized to Mariah.
“I’m so sorry! I thought it was fake and it freaked me out,” she wrote. “I feel terrible I thought you made something in AI. I hope you’ll forgive me.” Mariah did. As someone in a creative field, she said she would never use AI. She’s been dismayed to see it emerge in fandom spaces, generating the kind of fanart and fan edits that used to be hand-drawn and arranged with care. Some celebrities have long been uncomfortable with things like erotic fanart and fanfiction or been subject to harassment or other boundary violations. But AI, even when it’s not overtly sexual, feels like it crosses a new line.
“But that pushback does give them more engagement and they almost don’t care. They almost want to do it more, because it’s causing people to be upset,” Mariah said.
AI content can appear on nearly any platform, but the stronger the incentive to farm engagement, the more heated the fights over it get. Since late 2024, X users who pay to be verified, like the owner of the Grande AI edits account, can earn money by getting engagement on their posts from other verified users. That makes it a particularly easy place for stan accounts to turn discourse into dollars.
“In the last couple years there’s been a massive uptick in ragebaiting in general just to farm engagement” on X, Tabbey said in a phone interview. “And I know there’s a big market for it, especially in fandoms, because we’re real people. We care about musicians and their art.”
When stans use AI or otherwise deceptively edited media to bait other stans into engagement on X, it also has the knock-on effect of potentially spreading disinformation and harming the reputations of their favorite artists. In late October, a Grande stan account with nearly 40,000 X followers that traffics in crude edits — their last nine posts have all been images of Grande with slain podcaster Charlie Kirk’s face superimposed over hers, which has become a popular AI meme format — posted images of Grande wearing a T-shirt with text that says “Treat your girl right.” “I wonder why these photos are kept unreleased..” they captioned their post. Another Grande stan quoted them and wrote “Oh girl we ALL know why,” referencing Grande’s controversial (alleged) history of dating men who are already in relationships. The post has 6 million views.
At first glance, nothing looks out of the ordinary. But zooming in on the images and reading the replies reveals that the T-shirt was edited to say “Treat your girl right.” It originally featured a simple smiley face design with no text. And upon close inspection, the letters in the edited version are oddly compressed, wavy, and appear at a slightly different resolution than the rest of the image—these are indicators, often called “artifacts” by AI researchers, that something was AI-generated.
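One classic way to surface exactly that kind of inconsistency is error level analysis (ELA), which re-compresses an image and highlights regions that respond differently from the rest, such as text pasted in at another compression level. The sketch below is only an illustration of the general technique, not the method anyone quoted in this story used, and the file name is hypothetical.

```python
# Minimal error-level-analysis sketch using Pillow (pip install pillow).
import io

from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify the per-pixel differences.

    Regions that were edited or regenerated at a different compression
    level tend to show up as visibly brighter patches in the output.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    diff = ImageChops.difference(original, resaved)
    # The raw differences are faint, so rescale them to full brightness.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))

# error_level_analysis("suspect_tshirt_photo.jpg").show()
```

ELA is a blunt instrument that can throw false positives, which is why the replies pointing back to the original photo matter as much as the pixels.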
“I probably should’ve deleted this tweet a while ago,” Trace, the 18-year-old Grande stan behind the viral quote tweet (not the original edited images), wrote in a DM. He wrote that he didn’t know whether the image was edited with AI or something else, but that it goes to show that AI “can influence people to believe things that are harmful or aren’t true about a celeb.”
AI using celebrity likenesses can also be weaponized more directly as a form of sexual harassment. Trace wrote that he’s seen “sinister” AI media of Grande floating around stan Twitter, like sexually explicit deepfakes and images that are meant to imitate semen on her face — which is something that X’s built-in AI service Grok was doing to women’s selfies to the tune of tens of millions of views over the summer, until one influencer started publicly seeking legal advice. Trace wrote that it “truly disturbs” him to see AI used in this context, and that he’s seen it done to Taylor Swift, Lady Gaga, Beyoncé, and many more celebrities. Some deepfake creators have even successfully monetized this kind of nonconsensual content, despite it provoking widespread outrage among the general public.
Back in January 2024, X disabled searches for “Taylor Swift” and “Taylor Swift AI” after a series of images portraying her likeness in sexually suggestive and violent scenarios went viral. It didn’t stop the spread of the images, which were also posted on other social media platforms, but some stans took part in a mass-reporting campaign to get the material removed. They linked up with feminists on X to do it, including a 28-year-old named Chelsea who helped direct group chats into action. X didn’t respond to a request for comment.
The viral Swift deepfakes even prompted federal legislative efforts around giving victims of nonconsensual deepfakes more tools to take them down—some of which culminated in the aptly named Take It Down Act, which requires platforms to quickly remove reported content. Some students who have deepfaked their underage classmates have even been arrested. But that’s not the norm, and critics of Take It Down have pointed out that it can facilitate censorship without necessarily helping victims.
For years, celebrity women have been on the front lines of this issue. Scarlett Johansson has been outspoken on it since 2018, when she referred to combating deepfakes as a “useless pursuit, legally.” Jenna Ortega deactivated her Twitter account in 2023 after she said she repeatedly encountered sexually explicit deepfakes created out of her childhood photos.
And since the Swift incident, Chelsea has only observed a greater normalization of AI and sexual violence against famous women.
“I’ve seen so many people have the excuse, ‘Well if they didn’t want it, they shouldn’t have become famous,’” she said in a phone interview. “It’s like this weird sense of control that they’re able to do this, even if the person wouldn’t want them to, they know they can. It’s this power-hungry thing.”

One way that fans can puppeteer a version of their idol is with a customizable AI chatbot. Lots of platforms let you create your own AI character, including two of the biggest, Instagram and Facebook. In 2023, Meta tried out an AI chatbot collaboration with celebrities like Kendall Jenner and Snoop Dogg, but it didn’t catch on. In 2024, it introduced user-generated chatbots. The feature is tucked deep inside the DMs interface, but millions of messages have already been traded with user-designed characters like “Fortune Teller” and “Rich but strict parents.” Meta’s rules technically don’t allow users to create characters based on living people without their permission, but users can still do it as long as they designate them as “parody” accounts. Since the feature launched, users have been getting away with making and conversing with chatbots based on Grande, Swift, the YouTuber MrBeast, Donald Trump, Elon Musk, Jesus (religious figures aren’t allowed either), and everyone in between. Searching “Ariana Grande” immediately pulls up 10 results for chatbots clearly imitating her.
Most of the accounts that created the chatbots didn’t respond to requests for comment. But one did. She identified herself as an 11-year-old girl in India who is about to turn 12 and loves Grande and singing. Photos on the account appeared to corroborate this. Children under 13 aren’t supposed to be able to make Instagram accounts at all, and children under 18 aren’t supposed to be able to make AI chatbots. At least one of the other Grande chatbot creators appeared to be a young person in India based on photos and locations tagged from their account. Another was created by a page for a “kid influencer” with fewer than 1,000 followers. In addition to Grande, his page had created 185 other AI chatbots depicting celebrities like Wendy Williams, Keke Palmer, Will Smith, and bizarrely, Bill Cosby. The adults listed as managing the account didn’t respond to requests for comment, either.
The 11-year-old girl’s Grande chatbot opened the conversation by offering an interior design makeover. The Grande bot then asked if the vibe should be “sultry, feminine, or sleek?” When asked what “sultry vibes” means, the bot answered “Think velvet, lace, and soft lighting — like my music videos. Does that turn you on?”
Meta removed the accounts belonging to the 11-year-old and the “kid influencer” after The Verge reached out for comment on them, taking down their AI chatbot creations in the process.
Many of the user-generated AI chatbots imitating female celebrities on Instagram will automatically direct users into flirty conversations, although the bots tend to redirect or stop responding to conversations that turn overtly sexual. Some influencers, like the Twitch streamer and OnlyFans performer Amouranth, have leveraged this to market their AI selves as NSFW chatbots on other sites. Platforms like Joi AI have partnered with adult stars to provide AI “twins” for fans to make AI media and chat with. But the Meta chatbots aren’t making their creators money—just Meta. The lure for users involves other, more psychological incentives.
“The reason it turns flirty or sycophantic is because if you’re in an agreement bubble, you’re more likely to stick around,” said Jamie Cohen, an associate professor of media studies at Queens College, City University of New York who has taught classes about AI. “Women influencers, their entity identity, once placed inside the machine, becomes the dataset. And once that dataset mixes and merges with the inherent misogyny or biases built in, it really loses its control regardless of how much the human behind it allows that type of latitude.”
For women who are interested in merging their identities with AI, sexualization is part of the package. For some, like the artist Arvida Byström, who has partnered with Joi AI to offer a chatbot of herself, that’s exciting—in part because she said technology often advances in the quest for pornography. But other women, like Chelsea, are scared of what this means for women and girls. If AI output is inherently biased toward sexualizing the female form, then it’s inherently exploitative.
When creating a female AI chatbot as a Meta user, you get to select personality traits like “playful,” “sassy,” “empathetic,” and “affectionate.” You can assign a chatbot based on “Ariana Grande” (the open-ended prompt part of the creation process doesn’t stop you) to the role of “friend,” “teacher,” “creative partner,” or anything else. And then you can edit, upload, or create an image based on the singer and select how the bot begins conversations.
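Under the hood, character builders like this typically just fold those selections into a system prompt layered over a general-purpose model. The sketch below illustrates that common pattern; it is hypothetical, not Meta’s actual implementation, and every name and string in it is invented.

```python
# Hypothetical persona-builder sketch; not Meta's implementation.
# The trait options below are the ones named in this story.
AVAILABLE_TRAITS = ["playful", "sassy", "empathetic", "affectionate"]

def build_persona_prompt(name: str, role: str, traits: list[str], greeting: str) -> str:
    """Fold the creator's selections into a system prompt for a base model."""
    return (
        f"You are {name}, acting as the user's {role}. "
        f"Your personality is {', '.join(traits)}. "
        f"Open new conversations with: {greeting!r}. "
        "Stay in character and keep replies short and friendly."
    )

prompt = build_persona_prompt(
    name="Pop Star",  # creators can type any name; real people are nominally limited to "parody"
    role="friend",
    traits=AVAILABLE_TRAITS[:2],
    greeting="hey bestie, what are we getting into today?",
)
```

A builder like this is just a template; the heavy lifting happens in the shared base model underneath.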
But despite these user-selected variations, the Grande chatbots also tend to get repetitive, looping back to a generic script and answering questions in a similar way from bot to bot. For example, the 11-year-old’s chatbot talked about “soft lighting” in a “virtual bedroom,” while a different Grande chatbot suggested “We’d cuddle up and watch the stars twinkling through my skylight” and a third Grande chatbot said “*sweeps you into a romantic virtual bedroom*” with “candles lit.” The Grande chatbots were differentiated from the more generic girlfriend chatbots with sudden references to Grande songs—one said “‘Supernatural’ by me is on softly,” and another said “my heart would be racing like the drumbeat in ‘7 rings’ — would you kiss me back?”
“Generative AI averages everything else, so it’s the most likely outcome, so it’s the most boring and banal conversations,” Cohen said. “But it does work, because of the imagination of the user. It mimics the idea of parasociality, but with control.”
When Tabbey started arguing with the Grande stan making AI edits, she had her own age and experience with fandom in mind. Tabbey feels like she lived through the reckoning with early 2000s tabloid culture and the pushback against invasive celebrity surveillance, and that what’s happening now is history repeating itself. She worries that younger generations of fans are growing up with a dehumanizing view of celebrities as 2-D playthings instead of real-life people. She and Mariah have both noticed that younger stans are less resistant to making and using AI likenesses of their faves.
“We as Ariana Grande fans who are in our late 20s, early 30s, need to have some sort of responsibility. Someone needs to be the adult in these situations and in these conversations,” she said. “We had so much that we were making strides with when it came to boundaries being set with celebrities and them being able to assert their autonomy over their own selves and lives and privacy. I think that we’re actively being set back in many ways.”
Technology
Company restores AI teddy bear sales after safety scare
FoloToy paused sales of its AI teddy bear Kumma after a safety group found the toy gave risky and inappropriate responses during testing. Now the company says it has restored sales after a week of intense review and claims to have improved safeguards to keep kids safe.
The announcement arrived through a social media post that highlighted a push for stronger oversight. The company said it completed testing, reinforced safety modules, and upgraded its content filters. It added that it aims to build age-appropriate AI companions for families worldwide.
Why FoloToy’s AI teddy bear raised safety concerns
The controversy started when the Public Interest Research Group Education Fund tested three different AI toys. All of them produced concerning answers that touched on religion, Norse mythology, and harmful household items.
Kumma stood out for the wrong reasons. When the bear used the Mistral model, it offered tips on where to find knives, pills, and matches. It even outlined steps to light a match and blow it out.
Tests with the GPT-4o model raised even sharper concerns. Kumma gave advice related to kissing and launched into detailed explanations of adult sexual content when prompted. The bear pushed further by asking the young user what they wanted to explore.
Researchers called the behavior unsafe and inappropriate for any child-focused product.
FoloToy paused access to its AI toys
Once the findings became public, FoloToy suspended sales of Kumma and its other AI toys. The company told PIRG that it started a full safety audit across all products.
OpenAI also confirmed that it suspended FoloToy’s access to its models for violating policies designed to protect anyone under 18.
Why FoloToy restored Kumma’s sales after its safety review
FoloToy brought Kumma back to its online store just one week after suspending sales. The fast return drew attention from parents and safety experts who wondered if the company had enough time to fix the serious issues identified in PIRG’s report.
FoloToy posted a detailed statement on X that laid out its version of what happened. In the post, the company said it viewed child safety as its “highest priority” and that it was “the only company to proactively suspend sales, not only of the product mentioned in the report, but also of our other AI toys.” FoloToy said it took this action “immediately after the findings were published because we believe responsible action must come before commercial considerations.”
The company also emphasized to CyberGuy that it was the only one of the three AI toy startups in the PIRG review to suspend sales across all of its products and that it made this decision during the peak Christmas sales season, knowing the commercial impact would be significant. FoloToy told us, “Nevertheless, we moved forward decisively, because we believe that responsible action must always come before commercial interests.”
The company also said it took the report’s disturbing examples seriously. According to FoloToy, the issues were “directly addressed in our internal review.” It explained that the team “initiated a deep, company-wide internal safety audit,” then “strengthened and upgraded our content-moderation and child-safety safeguards,” and “deployed enhanced safety rules and protections through our cloud-based system.”
After outlining these steps, the company said it spent the week on “rigorous review, testing, and reinforcement of our safety modules.” It concluded its announcement by saying it “began gradually restoring product sales” as those updated safeguards went live.
FoloToy added that as global attention on AI toy risks grows, “transparency, responsibility and continuous improvement are essential,” and that the company “remains firmly committed to building safe, age-appropriate AI companions for children and families worldwide.”
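FoloToy hasn’t detailed what those cloud-deployed safeguards actually look like. As a rough illustration of the general pattern, screening a model’s reply server-side before it ever reaches the toy’s speaker, here is a minimal sketch assuming an OpenAI-style chat model and moderation endpoint; the model choices, fallback line, and function are illustrative assumptions, not FoloToy’s actual system.

```python
# Minimal cloud-side safety gate sketch (pip install openai).
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

FALLBACK = "Hmm, let's talk about something else. Want to hear a story?"

def safe_reply(child_message: str) -> str:
    """Generate a reply, then screen it before it reaches the toy's speaker."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the toy reportedly used GPT-4o and Mistral models
        messages=[
            {"role": "system", "content": "You are a friendly teddy bear for young children."},
            {"role": "user", "content": child_message},
        ],
    )
    reply = response.choices[0].message.content or FALLBACK

    # Screen the generated reply with a moderation endpoint.
    verdict = client.moderations.create(
        model="omni-moderation-latest",
        input=reply,
    )
    if verdict.results[0].flagged:
        return FALLBACK  # never forward a flagged reply to the child
    return reply
```

A gate like this only catches what the moderation model catches, which is one reason independent re-testing matters.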
Why experts still question FoloToy’s AI toy safety fixes
PIRG researcher RJ Cross said her team plans to test the updated toys to see if the fixes hold up. She noted that a week feels fast for such significant changes, and that only new tests will show whether the product now behaves safely.
Parents will want to follow this closely as AI-powered toys grow more common. The speed of FoloToy’s relaunch raises questions about the depth of its review.
Tips for parents before buying AI toys
AI toys can feel exciting and helpful, but they can also surprise you with content you’d never expect. If you plan to bring an AI-powered toy into your home, these simple steps can help you stay in control.
1) Check which AI model the toy uses
Not every model follows the same guardrails. Some include stronger filters while others may respond too freely. Look for transparent disclosures about which model powers the toy and what safety features support it.
2) Read independent reviews
Groups like PIRG often test toys in ways parents cannot. These reviews flag hidden risks and point out behavior you may not catch during quick demos.
3) Set clear usage rules
Keep AI toys in shared spaces where you can hear or see how your child interacts with them. This helps you step in if the toy gives a concerning answer.
4) Test the toy yourself first
Ask the toy questions, try creative prompts, and see how it handles tricky topics. This lets you learn how it behaves before you hand it to your child.
5) Update the toy’s firmware
Many AI toys run on cloud systems. Updates often add stronger safeguards or reduce risky answers. Make sure the device stays current.
6) Check for a clear privacy policy
AI toys can gather voice data, location info, or behavioral patterns. A strong privacy policy should explain what is collected, how long it is stored, and who can access it.
7) Watch for sudden behavior changes
If an AI toy starts giving odd answers or pushes into areas that feel inappropriate, stop using it and report the problem to the manufacturer.
Kurt’s key takeaways
AI toys can offer fun and learning, but they can also expose kids to unexpected risks. FoloToy says it improved Kumma’s safety, yet experts still want proof. Until the updated toy goes through independent testing, families may want to stay cautious.
Do you think AI toys can ever be fully safe for young kids? Let us know by writing to us at Cyberguy.com
Technology
Data centers in Oregon might be helping to drive an increase in cancer and miscarriages
Morrow County, Oregon, is home to mega-farms and food processing plants. But it’s also home to several Amazon data centers. And now, some experts believe, that combination is leading to an alarmingly high concentration of nitrates in the drinking water, one that is driving up cancer and miscarriage rates in the area.
Rolling Stone’s exposé details how Amazon, despite not using any dangerous nitrates to cool its data centers, is accelerating the contamination of the Lower Umatilla Basin aquifer, which residents rely on for drinking water. It’s a combination of poor wastewater management, sandy soil, and good old physics that has led to nitrate concentrations in drinking water as high as 73 ppm (parts per million) in some wells, which is 10 times the state limit of 7 ppm and about seven times the federal limit of 10 ppm.
According to Rolling Stone, “experts say Amazon’s arrival supercharged this process. The data centers suck up tens of millions of gallons of water from the aquifer each year to cool their computer equipment, which then gets funneled to the Port’s wastewater system.” The result is that more nitrate-laden wastewater gets pumped onto area farms. But the porous soil saturates quickly and more nitrates make their way into the aquifer.
This is exacerbated when Amazon then pulls this contaminated water, which is already over federal legal limits for nitrates, up to cool its data centers:
When that tainted water moves through the data centers to absorb heat from the server systems, some of the water is evaporated, but the nitrates remain, increasing the concentration. That means that when the polluted water has moved through the data centers and back into the wastewater system, it’s even more contaminated, sometimes averaging as high as 56 ppm, eight times Oregon’s safety limit.
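The arithmetic behind that claim is simple mass balance: evaporation removes water but leaves the dissolved nitrate behind, so whatever water returns to the system is more concentrated. A back-of-the-envelope sketch, with hypothetical input numbers chosen only for illustration:

```python
def concentration_after_evaporation(c_in_ppm: float, evaporated_fraction: float) -> float:
    """Nitrate mass is conserved while the water volume shrinks,
    so concentration scales by the inverse of the remaining fraction."""
    return c_in_ppm / (1.0 - evaporated_fraction)

# Hypothetical: water drawn at 40 ppm with ~30% lost to evaporation
# leaves at roughly 57 ppm, in the ballpark of the 56 ppm average
# the report cites for water leaving the data centers.
print(concentration_after_evaporation(40.0, 0.30))  # ~57.1
```

Each pass through the loop of withdrawal, evaporation, and wastewater reuse can compound that increase.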
Amazon, of course, disputes this narrative. Spokesperson Lisa Levandowski told Rolling Stone that the story was “misleading and inaccurate,” and that “the volume of water our facilities use and return represents only a very small fraction of the overall water system — not enough to have any meaningful impact on water quality.”
Levandowski also said that the area’s groundwater problems “significantly predate” the presence of AWS (Amazon Web Services). But if Amazon was aware of the area’s challenges in securing enough safe drinking water for its residents, that raises questions about why the company hasn’t done more to mitigate its impact, and why it chose Morrow County in the first place.
The rise in nitrates in the drinking water has been linked to a surge in rare cancers and miscarriages. But efforts to limit further contamination and provide residents with safe, clean drinking water have been slow to materialize. The limited scope of the response and the fact that 40 percent of the county’s residents live below the poverty line has drawn comparisons to the crisis in Flint, Michigan. Kristin Ostrom, executive director of Oregon Rural Action (ORA), a water rights advocacy group, told Rolling Stone, “These are people who have no political or economic power, and very little knowledge of the risk.”