Technology

Fandoms are cashing in on AI deepfakes

Madison Lawrence Tabbey was scrolling through X in late October when a post from a Wicked update account caught her attention. Ariana Grande, who stars in the movies as Glinda, had just liked a meme on Instagram about never wanting to see another AI-generated image again. Grande had also purportedly blocked a fan account that had made AI edits of her.
As Tabbey read through the mostly sympathetic replies, a very different message caught her eye. It was from a fellow Grande fan whose profile was mostly AI edits, showing Grande with different hairstyles and outfits. And, their reply said, they weren’t going to stop. Tabbey, a 33-year-old living in Nashville, Tennessee, couldn’t help but start arguing with them. “Oh so you were SERIOUS when you said you don’t care about poor communities not having water so that you can make AI pictures of ariana grande?” she shot back, referencing data centers draining resources and polluting cities like nearby Memphis. The account fired back at first, but amid a swarm of angry responses, it deactivated a few days later. It seemed like the owner wanted to argue and make people mad, but they might have taken things too far.
Grande is one of many celebrities and influencers who have openly rejected AI media exploiting their likenesses, but who continue to be prominently featured in it anyway, even among people who call themselves fans. As AI images and videos become ever simpler to produce, celebrities are facing down a mix of unsettled social norms and the incentives of an internet attention economy. And on “stan Twitter,” where pop culture accounts have grown into a lucrative fan-made media ecosystem, AI content has emerged as a growing genre, despite — or maybe because of — the outrage it provokes.
“Stan Twitter is very against AI just in general. So this goes against what people believe in, so then they’ll instantly get a comment, they’ll have the AI people retweet it, like it. So it’s just a very quick way to get money,” said Brandon, a 25-year-old who runs a verified fan account for Grande with close to 25,000 followers.
Brandon spoke on the condition that his account name and his last name be withheld, fearing retaliation from other people on stan Twitter. (Grande’s fans have been known to harass people; in 2019 the pop star told one critic under siege that she apologized on her fans’ behalf, but couldn’t stop them.) He told The Verge he’s against most AI media, but he did ask ChatGPT to rank Grande’s top 10 songs that weren’t released as singles. He compiled the results into a thread that got over 1,000 likes. That seemed morally okay to him, as opposed to making AI pictures of Grande — commonly known as deepfakes — or Grande-inspired AI songs.
Grande’s position on the latter is clear. In a February 2024 interview, she called it “terrifying” that people were posting AI-generated imitations of her covering songs by other artists like Sabrina Carpenter and Dua Lipa. The rebuke hasn’t stopped them, though. Searching “ariana grande ai cover” on X still pulls up plenty of AI songs, although some have been removed by X in response to reports made by the original songs’ copyright owners.
Even the musician Grimes, who in 2023 encouraged fans to create AI songs based on her voice, said in October that the experience of having her likeness co-opted by AI “felt really weird and really uncomfortable.” She’s now calling for “international treaties” to regulate deepfakes.
“It’s just a very quick way to get money”
Grimes’ more recent comments follow the launch of an app that dramatically escalated AI media proliferation: OpenAI’s Sora video generator. Sora is built around a feature called “Cameos,” which lets anyone offer up their likeness for other users to play with. Many of the results were predictably offensive, and once they’re online, they’re nearly impossible to remove.
Grimes was reacting to videos of influencer and boxer Jake Paul, whose Cameo is available on Sora. Paul, who is an OpenAI investor, was the face of the launch. He said AI videos of him generated by Sora were viewed more than a billion times in the first week. Some of the viral ones portrayed Paul as gay, relying on homophobic stereotypes as the joke. The same thing happened when a self-identified homophobic British influencer offered his likeness to Sora, and again when the YouTuber IShowSpeed did.
Paul capitalized on the trend, filming a Celsius brand endorsement with a purposefully flamboyant affect, while the other men threatened defamation suits and attempted to shut down their Sora Cameos.
Sora has since added more granular controls for Cameos, and it technically allows their owners to delete videos they don’t like. But Sora videos are quickly ripped and posted to other platforms, where OpenAI can’t remove them. When IShowSpeed attempted to delete AI depictions of him coming out, he encountered the problem most victims of nonconsensual media run into: Maybe you can get one video taken down, but by that time, more have already cropped up elsewhere. And as Paul’s fiancée said in a video objecting to the Sora 2 videos of him coming out, “It’s not funny. People believe—” (Paul cut off the video there).
Alongside Paul, just a few other popular YouTubers, like Justine Ezarik (better known as iJustine), have promoted their own deepfakes made with Sora. In Ezarik’s case, most of her content relates to unboxing and sharing new tech industry products. Shark Tank host Mark Cuban offered up his likeness on Sora, too, which shocked SocialProof Security CEO Rachel Tobac, who told The Verge that scammers have already been tricking people with AI-generated Shark Tank endorsements. “I mean, there’s been an explosion of impersonation,” Tobac said.
“There’s been an explosion of impersonation”
But after teasing the Sora updates, Paul, Ezarik, and Cuban had all stopped posting about it and their deepfakes by the end of the month. Jeremy Carrasco, a video producer whose Instagram explainers about how to spot AI videos have netted him nearly a quarter of a million followers this year, said that most influencers he talks to aren’t interested in creating their own deepfakes—they’re more worried that people could accuse them of faking their content or that their fans could be scammed.
Deepfakes have shifted from something mainly created on seedy forums at the turn of the decade into one of the most accessible technologies today. Still, they have yet to take hold as an acceptable mainstream way for fans to engage with their favorite stars. Instead, when they go viral, it’s mostly offensive content.
“The normalization of deepfakes is something no one was asking for. It’s something that OpenAI did because it made their thing more viral and social,” Carrasco said. “Once you open that door to being okay with people deepfaking you, even if it’s your friends deepfaking you, all of a sudden your likeness has just gotten fucked. You’re no longer in control of it and you can’t pull it back.”
Image: Cath Virginia / The Verge, Getty Images
The reasonable fears around having your likeness exploited in AI media have understandably made celebrities a bit jumpy. That recently led to a tense moment between Criminal Minds star Paget Brewster and one of her favorite fan accounts on X, run by a 27-year-old film student named Mariah. Over the weekend, Mariah posted a brightened screenshot of a scene in an episode from years ago, one where Brewster’s character was taking a nap. Brewster saw Mariah’s post and replied “Um, babe, this is AI generated and kinda creepy. Please don’t make fake images of me? I thought we were friends. I’d like to stay friends.”
When Mariah saw Brewster’s reply, she gasped out loud. By the time she responded, other Criminal Minds fans had chimed in to let Brewster know that it wasn’t an AI-generated image. The actress, who is 56 and recently asked another fan what a “parody account” is, publicly and profusely apologized to Mariah.
“I’m so sorry! I thought it was fake and it freaked me out,” she wrote. “I feel terrible I thought you made something in AI. I hope you’ll forgive me.” Mariah did. As someone in a creative field, she said she would never use AI. She’s been dismayed to see it emerge in fandom spaces, generating the kind of fanart and fan edits that used to be hand-drawn and arranged with care. Some celebrities have long been uncomfortable with things like erotic fanart and fanfiction or been subject to harassment or other boundary violations. But AI, even when it’s not overtly sexual, feels like it crosses a new line.
“But that pushback does give them more engagement and they almost don’t care. They almost want to do it more, because it’s causing people to be upset,” Mariah said.
“They almost want to do it more, because it’s causing people to be upset.”
AI content can appear on nearly any platform, but the stronger the incentive to farm engagement, the more heated the fights over it get. Since late 2024, X users who pay to be verified, like the owner of the Grande AI edits account, can earn money by getting engagement on their posts from other verified users. That makes it a particularly easy place for stan accounts to turn discourse into dollars.
“In the last couple years there’s been a massive uptick in ragebaiting in general just to farm engagement” on X, Tabbey said in a phone interview. “And I know there’s a big market for it, especially in fandoms, because we’re real people. We care about musicians and their art.”
Stans’ use of AI or otherwise deceptively edited media to bait other stans into engagement on X also has the knock-on effect of potentially spreading disinformation and harming the reputations of their favorite artists. In late October, a Grande stan account with nearly 40,000 X followers that traffics in crude edits — their last nine posts have all been images of Grande with slain podcaster Charlie Kirk’s face superimposed over hers, which has become a popular AI meme format — posted images of Grande wearing a T-shirt with text that says “Treat your girl right.” “I wonder why these photos are kept unreleased..” they captioned their post. Another Grande stan quoted them and wrote “Oh girl we ALL know why,” referencing Grande’s controversial (alleged) history of dating men who are already in relationships. The post has 6 million views.
At first glance, nothing looks out of the ordinary. But zooming in on the images and reading the replies reveals that the T-shirt was edited to say “Treat your girl right.” It originally featured a simple smiley face design with no text. And upon close inspection, the letters in the edited version are oddly compressed, wavy, and appear at a slightly different resolution than the rest of the image—these are indicators, often called “artifacts” by AI researchers, that something was AI-generated.
“I probably should’ve deleted this tweet a while ago,” wrote Trace, the 18-year-old Grande stan behind the viral quote tweet (not the original edited images) in a DM. He wrote that he didn’t know whether the image was edited with AI or something else, but that it goes to show that AI “can influence people to believe things that are harmful or aren’t true about a celeb.”
AI using celebrity likenesses can also be weaponized more directly as a form of sexual harassment. Trace wrote that he’s seen “sinister” AI media of Grande floating around stan Twitter, like sexually explicit deepfakes and images that are meant to imitate semen on her face — which is something that X’s built-in AI service Grok was doing to women’s selfies to the tune of tens of millions of views over the summer, until one influencer started publicly seeking legal advice. Trace wrote that it “truly disturbs” him to see AI used in this context, and that he’s seen it done to Taylor Swift, Lady Gaga, Beyoncé, and many more celebrities. Some deepfake creators have even successfully monetized this kind of nonconsensual content, despite it provoking widespread outrage among the general public.
Back in January 2024, X disabled searches for “Taylor Swift” and “Taylor Swift AI” after a series of images portraying her likeness in sexually suggestive and violent scenarios went viral. It didn’t stop the spread of the images, which were also posted on other social media platforms, but some stans partook in a mass-reporting campaign to get the material removed. They linked up with feminists on X to do it, including a 28-year-old named Chelsea who helped direct group chats into action. X didn’t respond to a request for comment.
The viral Swift deepfakes even prompted federal legislative efforts around giving victims of nonconsensual deepfakes more tools to take them down—some of which culminated in the aptly named Take It Down Act, which requires platforms to quickly remove reported content. Some students who have deepfaked their underage classmates have even been arrested. But that’s not the norm, and critics of Take It Down have pointed out that it can facilitate censorship without necessarily helping victims.
“It’s like this weird sense of control”
For years, celebrity women have been on the front lines of this issue. Scarlett Johansson has been outspoken on it since 2018, when she referred to combating deepfakes as a “useless pursuit, legally.” Jenna Ortega deactivated her Twitter account in 2023 after she said she repeatedly encountered sexually explicit deepfakes created out of her childhood photos.
And since the Swift incident, Chelsea has only observed a greater normalization of AI and sexual violence against famous women.
“I’ve seen so many people have the excuse, ‘Well if they didn’t want it, they shouldn’t have become famous,’” she said in a phone interview. “It’s like this weird sense of control that they’re able to do this, even if the person wouldn’t want them to, they know they can. It’s this power-hungry thing.”

Image: Cath Virginia / The Verge, Getty Images
One way that fans can puppeteer a version of their idol is with a customizable AI chatbot. Lots of platforms provide the ability to create your own AI character, some of the biggest being Instagram and Facebook. In 2023, Meta tried out an AI chatbot collaboration with celebrities like Kendall Jenner and Snoop Dogg, but it didn’t catch on. In 2024, it introduced user-generated chatbots. The feature is tucked away deep in the DMs function, but millions of messages have already been traded with user-designed characters like “Fortune Teller” and “Rich but strict parents.” Meta’s rules technically don’t allow users to create characters based on living people without their permission, but users can still do it as long as they designate them as “parody” accounts. Users have been getting away with making and conversing with chatbots based on Grande, Swift, the YouTuber MrBeast, Donald Trump, Elon Musk, Jesus (religious figures aren’t allowed either), and everyone in between since the feature launched. Searching “Ariana Grande” pulls up 10 results for chatbots clearly imitating her right away.
Most of the accounts that created the chatbots didn’t respond to requests for comment. But one did. She identified herself as an 11-year-old girl in India who is about to turn 12 and loves Grande and singing. Photos on the account appeared to corroborate this. Children under 13 aren’t supposed to be able to make Instagram accounts at all, and children under 18 aren’t supposed to be able to make AI chatbots. At least one of the other Grande chatbot creators appeared to be a young person in India based on photos and locations tagged from their account. Another was created by a page for a “kid influencer” with fewer than 1,000 followers. In addition to Grande, his page had created 185 other AI chatbots depicting celebrities like Wendy Williams, Keke Palmer, Will Smith, and bizarrely, Bill Cosby. The adults listed as managing the account didn’t respond to requests for comment, either.
The 11-year-old girl’s Grande chatbot opened the conversation by offering an interior design makeover. The Grande bot then asked if the vibe should be “sultry, feminine, or sleek?” When asked what “sultry vibes” means, the bot answered “Think velvet, lace, and soft lighting — like my music videos. Does that turn you on?”
Meta removed the accounts belonging to the 11-year-old and the “kid influencer” after The Verge reached out for comment on them, removing their AI chatbot creations in the process, too.
Many of the user-generated AI chatbots imitating female celebrities on Instagram will automatically direct users into flirty conversations, although the bots tend to redirect or stop responding to conversations that turn overtly sexual. Some influencers, like the Twitch streamer and OnlyFans performer Amouranth, have leveraged this to market their AI selves as NSFW chatbots on other sites. Platforms like Joi AI have partnered with adult stars to provide AI “twins” for fans to make AI media and chat with. But the Meta chatbots aren’t making their creators money—just Meta. The lure for users involves other, more psychological incentives.
“If you’re in an agreement bubble, you’re more likely to stick around”
“The reason it turns flirty or sycophantic is because if you’re in an agreement bubble, you’re more likely to stick around,” said Jamie Cohen, an associate professor of media studies at Queens College, City University of New York who has taught classes about AI. “Women influencers, their entity identity, once placed inside the machine, becomes the dataset. And once that dataset mixes and merges with the inherent misogyny or biases built in, it really loses its control regardless of how much the human behind it allows that type of latitude.”
For women who are interested in merging their identities with AI, sexualization is part of the package. For some, like the artist Arvida Byström, who has partnered with Joi AI to offer a chatbot of herself, that’s exciting—in part because she said technology often advances in the quest for pornography. But other women, like Chelsea, are scared of what this means for women and girls. If AI output is inherently biased toward sexualizing the female form, then it’s inherently exploitative.
When creating a female AI chatbot as a Meta user, you get to select personality traits like “playful,” “sassy,” “empathetic,” and “affectionate.” You can assign a chatbot based on “Ariana Grande” (the open-ended prompt part of the creation process doesn’t stop you) to the role of “friend,” “teacher,” “creative partner,” or anything else. And then you can edit, upload, or create an image based on the singer and select how the bot begins conversations.
But despite these user-selected variations, the Grande chatbots also tend to get repetitive, looping back to a generic script and answering questions in a similar way from bot to bot. For example, the 11-year-old’s chatbot talked about “soft lighting” in a “virtual bedroom,” while a different Grande chatbot suggested “We’d cuddle up and watch the stars twinkling through my skylight” and a third Grande chatbot said “*sweeps you into a romantic virtual bedroom*” with “candles lit.” The Grande chatbots were differentiated from the more generic girlfriend chatbots with sudden references to Grande songs—one said “‘Supernatural’ by me is on softly,” and another said “my heart would be racing like the drumbeat in ‘7 rings’ — would you kiss me back?”
“Generative AI averages everything else, so it’s the most likely outcome, so it’s the most boring and banal conversations,” Cohen said. “But it does work, because of the imagination of the user. It mimics the idea of parasociality, but with control.”
When Tabbey started arguing with the Grande stan making AI edits, she had her own age and experience with fandom in mind. Tabbey lived through the reckoning with early 2000s tabloid culture and the pushback against invasive celebrity surveillance, and what’s happening now feels to her like history repeating itself. She worries that younger generations of fans are growing up with a dehumanizing view of celebrities as 2-D playthings instead of real-life people. She and Mariah have both noticed that younger stans are less resistant to making and using AI likenesses of their faves.
“We as Ariana Grande fans who are in our late 20s, early 30s, need to have some sort of responsibility. Someone needs to be the adult in these situations and in these conversations,” she said. “We had so much that we were making strides with when it came to boundaries being set with celebrities and them being able to assert their autonomy over their own selves and lives and privacy. I think that we’re actively being set back in many ways.”
The future of local TV news has taken a Trumpian turn
This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more stories on Big Tech versus politics in Washington, DC, follow Tina Nguyen and read Regulator. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.
A long time ago, in 2004, the Federal Communications Commission laid down a rule designed to prevent a monopoly: No one company could broadcast to more than 39 percent of all the TV households in the United States. But then Donald Trump returned to the White House in 2025. Brendan Carr became FCC chairman and immediately kicked off a deregulatory initiative called “Delete, Delete, Delete,” in which Carr vowed to get rid of “every rule, regulation, or guidance document” that placed “unnecessary regulatory burdens” on companies. And within months, Nexstar, which already owned over 200 stations nationwide and had hit its ownership cap, announced that it had entered an agreement to purchase its rival, Tegna, for an estimated $6.2 billion — something that could only happen, however, if Carr agreed to change the FCC’s rules.
If you ask Nexstar why it’s pursuing a merger that would give it control of over 80 percent of the market, it’d point to Big Tech as the culprit. As advertisers take their money to Netflix, YouTube, and other digital streamers, linear television — the local television news, the broadcast affiliates, the basic cable networks — has suffered, forcing broadcasters to consolidate and shut down newsrooms. In that sense, Nexstar argued, the merger would help it compete for ad revenue with the streaming services, thereby building more robust local journalism. However, the merger’s opponents believe that it is a basic violation of antitrust laws and principles — not to mention the danger of letting one company have editorial control over the vast majority of America’s local television newsrooms.
But the second Trump administration handles regulatory hurdles a little differently than others, and companies have found that it’s faster to get what they want if they bypass the agencies and talk (read: suck up) to Trump directly. And when Nexstar did so publicly, it confirmed its opponents’ fears about political influence. Last September, in the fraught weeks after the fatal shooting of Charlie Kirk, Nexstar announced it would no longer broadcast Jimmy Kimmel Live! — a response to Carr’s claim that the FCC could revoke the broadcast licenses of TV stations that aired the comedian’s comments related to Kirk. It briefly led to ABC suspending Kimmel’s show, though ABC and Nexstar soon reversed their decision after a massive nationwide backlash and an ABC boycott.
However, Nexstar’s loyalty to Trump himself was not enough to win over his most powerful MAGA supporters. Newsmax, a cable news network with a deeply pro-Trump bent, and its CEO, longtime Trump donor and outside adviser Chris Ruddy, filed a lawsuit objecting to the merger, claiming that Nexstar’s anticompetitive behavior would force channels like his off the air with steeper carriage fees. He specifically accused Nexstar of jacking up the fees for stations to carry Newsmax, while offering its similar network, NewsNation, for much cheaper.
The Nexstar-Tegna MAGA makeover then took a more subtle turn. NewsNation hired the pro-Trump Fox News commentator Katie Pavlich and gave her her own primetime show. (The network had already hired a slew of former Fox journalists as well.) Around this time, a political group called Keep News Local began airing ads in DC that seemed to directly address Trump, praising him for having “defeated the fake news monopolies before through independent voices and local news” and claiming that the Nexstar-Tegna merger was “crucial for MAGA to survive.” (A little self-contradictory and mildly illogical, but it’s the kind of stuff that Trump likes to hear.) When I last spoke to Ruddy in February, I asked if he’d worried that the dark money going into Keep News Local would sway Trump, and he chose his words carefully: “I think at the end of the day, Trump makes up his own mind. I’m not sure he’s going to be influenced by an ad campaign.”
For months, no one could accurately predict if Trump would override Carr’s wishes and bless the deal, as he’s often done for other companies facing regulatory scrutiny. Trump’s Truth Social posts about the merger have been a good indicator of how precarious the merger has been and who’s been able to influence him at any given moment: Last November, he blasted the deal as an “EXPANSION OF THE FAKE NEWS NETWORKS,” but by February, he posted that the deal would “help knock out the Fake News because there will be more competition.”
Several current and former NewsNation employees told Status at the time that they feared that the parent company was steering NewsNation away from the centrist, “unbiased” reputation they’d long cultivated. “A lot of people within the network believe that the network has gone hard right to appeal to Trump and Brendan Carr,” one former employee told Status. Coincidentally, days before the deal was finalized, NewsNation began ramping up its explicitly pro-Trump content, tweeting a clip of CNN’s Kaitlan Collins being berated by White House press secretary Karoline Leavitt, along with the comment “Just going to leave this here.”
When Trump greenlit the merger in mid-March, but before the FCC’s three commissioners could vote on whether to waive the ownership cap, Nexstar and Tegna immediately announced a new complication: the two companies had already started merging. Tegna was no more, and CEO Mike Steib had already sold $22.6 million of his company stock.
In response, eight state attorneys general and satellite TV operator DirecTV, which had already been planning to file separate federal antitrust suits against the merger, asked US District Judge Troy Nunley in Sacramento for an emergency restraining order that would prevent Nexstar from taking over Tegna’s assets. The order was granted on March 27th, and on April 17th, Nunley issued a formal injunction, ruling that Tegna must be operated as an independent financial entity and that Nexstar must take steps to ensure it remains separate from Tegna before further legal proceedings.
For now, Nunley has allowed the states and DirecTV to combine their cases, in which both argue that the merger was a clear violation of antitrust laws and would crush news competition.
Meanwhile, Republicans and Democrats in Congress are furious at Carr. On March 30th, Sens. Ted Cruz (R-TX) and Maria Cantwell (D-WA) sent the chairman a joint letter admonishing him for allowing his staff to waive the regulations to let the merger pass, instead of putting the matter to a vote of the full commission of political appointees, one of whom is from the Biden administration. “Under these circumstances,” they wrote, “any subsequent vote risks being largely procedural rather than a genuine exercise of commission responsibility.” They also pointed out that the hasty staff-level approval would now complicate the merger financially: “In a transaction of this scale, where integration proceeds quickly and unwinding becomes impractical, delay in judicial review can insulate the decision from meaningful challenge.” Notably, though they share similar ideological views on the media and deregulation, Cruz and Carr have frequently clashed over how to achieve their objectives. Cruz previously slammed Carr as a “mafioso,” for instance, for the way he’d used the FCC to silence Kimmel.
But even with the merger legally paused, its fallout has started to hit local news. NPR’s David Folkenflik reported on Tuesday that Tegna journalists had already started receiving orders to stop broadcasting content from major networks like ABC, CBS, and NBC — media outlets being targeted by Carr — and instead begin airing content from Nexstar’s NewsNation.
- Brendan Carr’s views on using the FCC to punish major broadcasters were outlined pretty extensively in the chapter he authored in Project 2025, an initiative led by the conservative Heritage Foundation on how to reform the federal bureaucracy to be more favorable to the American right.
- Exactly how much is local television losing to digital? According to industry publication NewscastStudio, in an investor call defending the purchase, Nexstar chairman Perry Sook cited a market research study from Borrell Associates, which found that “digital advertising in local markets exceeds $100 billion, compared to just $25 billion for local linear television advertising, with nearly two-thirds of digital ad dollars flowing to five major technology companies.”
- If you want to see exactly how much Keep News Local was trying to suck up to Trump, the ads are archived here.
- The Vergecast has a long-running segment called “Brendan Carr is a dummy.”
- The LA Times reported on last week’s preliminary hearings in front of Nunley, and how lawyers for Nexstar, the states, and DirecTV plan to argue their case.
- The Desk has insights from Kirk Varner, a former TV newsroom director, on how the case could go.
- Andrew Liptak covered Nexstar’s previous acquisition sprees for The Verge in 2018.
- Adi Robertson walks through exactly how the Kimmel suspension was an attack on free speech.
- Brendan Carr keeps trying to convince people that he’s not threatening to suspend broadcast licenses for reporting on unfavorable things like the Iran war, reports Lauren Feiner.
Chinese robot breaks human world record in Beijing half-marathon
A Chinese-built humanoid robot beat the human half-marathon world record in Beijing on Sunday, marking a breakthrough moment in a high-stakes global race for technological dominance.
A robot developed by Chinese smartphone maker Honor completed the 21-kilometer (13-mile) race in 50 minutes and 26 seconds, beating the human record of about 57 minutes set by Uganda’s Jacob Kiplimo last month.
The performance marked a dramatic improvement from last year’s inaugural event, when the top robot finished in more than 2 hours and 40 minutes.
Dozens of humanoid robots competed alongside about 12,000 human runners, navigating a parallel course to avoid collisions.
A robot crosses the finish line in the Beijing E-Town Half Marathon and Humanoid Robot Half-Marathon held in the outskirts of Beijing on April 19, 2026. (Andy Wong/AP)
Nearly half of the robots ran using autonomous navigation, while others relied on remote control, organizers said.
Despite the breakthrough, the race still saw glitches, with some robots stumbling at the start or veering into barriers.
Engineers said the winning robot was designed to mimic elite athletes, with legs about 37 inches long and advanced cooling systems to sustain performance.
“Looking ahead, some of these technologies might be transferred to other areas,” said Du Xiaodi, an engineer with the Honor team. “For example, structural reliability and liquid-cooling technology could be applied in future industrial scenarios.”
Team members celebrate next to the winning Honor Lightning humanoid robot during a medal ceremony after the second Beijing E-Town Half Marathon and Humanoid Robot Half Marathon in Beijing, China, on April 19, 2026. (Maxim Shemetov/Reuters)
Spectators reacted with a mix of amazement and unease at the machines’ rapid progress.
“It’s the first time robots have surpassed humans, and that’s something I never imagined,” Sun Zhigang, who attended the event with his son, told The Associated Press.
“The robots’ speed far exceeds that of humans,” spectator Wang Wen told the outlet. “This may signal the arrival of sort of a new era.”
A robot starts alongside human runners at the Beijing E-Town Half Marathon and Humanoid Half Marathon on the outskirts of Beijing on April 19, 2026. (Ng Han Guan/AP)
Experts say the race highlights China’s accelerating push to dominate robotics and artificial intelligence, even as widespread commercial use of humanoid robots remains limited, according to Reuters. The experts said Chinese robotics firms are still working to develop the AI software needed for humanoids to match the efficiency of human factory workers.
Runners take pictures of a humanoid robot during the second Beijing E-Town Half Marathon and Humanoid Robot Half Marathon in Beijing on April 19, 2026. (Haruna Furuhashi/Pool Photo via AP)
“The future will definitely be an AI era,” engineering student Chu Tianqi told Reuters. “If people don’t know how to use AI now … they will definitely become obsolete.”
The competition underscores a broader technological race between China and the United States, as Beijing invests heavily in advanced robotics as part of its long-term economic strategy.
The Associated Press and Reuters contributed to this report.
The RAM shortage could last years
According to Nikkei Asia, even as suppliers ramp up DRAM production, manufacturers are only expected to meet 60 percent of demand by the end of 2027. SK Group’s chairman has even said that shortages could last until 2030.
The world’s largest memory makers — Samsung, SK Hynix, and Micron — are all working to add new fabrication capacity, but almost none of it will be online until at least 2027, if not 2028. SK Hynix opened a fab in Cheongju in February, but that is the only production increase among the three for 2026.
Nikkei says that production would need to increase by 12 percent a year in 2026 and 2027 to meet demand. But according to Counterpoint Research, an increase of only 7.5 percent is planned.
The new facilities will primarily focus on producing high-bandwidth memory (HBM), which is used in AI data centers. With the companies already prioritizing HBM over general-purpose DRAM used in computers and phones, it’s not clear how much these new fabs will help alleviate the price crunch facing consumer electronics. Everything from phones and laptops to VR headsets and gaming handhelds has seen price increases due to the RAM shortage.