
Technology

Fandoms are cashing in on AI deepfakes


Madison Lawrence Tabbey was scrolling through X in late October when a post from a Wicked update account caught her attention. Ariana Grande, who stars in the Wicked movies as Glinda, had just liked a meme on Instagram about never wanting to see another AI-generated image again. Grande had also purportedly blocked a fan account that had made AI edits of her.

As Tabbey read through the mostly sympathetic replies, a very different message caught her eye. It was from a fellow Grande fan whose profile was mostly AI edits, showing Grande with different hairstyles and outfits. And, their reply said, they weren’t going to stop. Tabbey, a 33-year-old living in Nashville, Tennessee, couldn’t help but start arguing with them. “Oh so you were SERIOUS when you said you don’t care about poor communities not having water so that you can make AI pictures of ariana grande?” she shot back, referencing data centers draining resources and polluting cities like nearby Memphis. The account fired back at first, but amid a swarm of angry responses, it deactivated a few days later. It seemed like the owner wanted to argue and make people mad, but they might have taken things too far.

Grande is one of many celebrities and influencers who have openly rejected AI media exploiting their likenesses, but who continue to be prominently featured in it anyway, even among people who call themselves fans. As AI images and videos become ever simpler to produce, celebrities are facing down a mix of unsettled social norms and the incentives of an internet attention economy. And on “stan Twitter,” where pop culture accounts have grown into a lucrative fan-made media ecosystem, AI content has emerged as a growing genre, despite — or maybe because of — the outrage it provokes.

“Stan Twitter is very against AI just in general. So this goes against what people believe in, so then they’ll instantly get a comment, they’ll have the AI people retweet it, like it. So it’s just a very quick way to get money,” said Brandon, a 25-year-old who runs a verified fan account for Grande with close to 25,000 followers.

Brandon spoke on the condition that his account name and his last name be withheld, fearing retaliation from other people on stan Twitter. (Grande’s fans have been known to harass people; in 2019 the pop star told one critic under siege that she apologized on her fans’ behalf, but couldn’t stop them.) He told The Verge he’s against most AI media, but he did ask ChatGPT to rank Grande’s top 10 songs that weren’t released as singles. He compiled the results into a thread that got over 1,000 likes. That seemed morally okay to him, as opposed to making AI pictures of Grande — commonly known as deepfakes — or Grande-inspired AI songs.


Grande’s position on the latter is clear. In a February 2024 interview, she called it “terrifying” that people were posting AI-generated imitations of her covering songs by other artists like Sabrina Carpenter and Dua Lipa. The rebuke hasn’t stopped them, though. Searching “ariana grande ai cover” on X still pulls up plenty of AI songs, although some have been removed by X in response to reports made by the original songs’ copyright owners.

Even the musician Grimes, who in 2023 encouraged fans to create AI songs based on her voice, said in October that the experience of having her likeness co-opted by AI “felt really weird and really uncomfortable.” She’s now calling for “international treaties” to regulate deepfakes.

“It’s just a very quick way to get money”

Grimes’ more recent comments follow the launch of an app that dramatically escalated AI media proliferation: OpenAI’s Sora video generator. Sora is built around a feature called “Cameos,” which lets anyone offer up their likeness for other users to play with. Many of the results were predictably offensive, and once they’re online, they’re nearly impossible to remove.

Grimes was reacting to videos of influencer and boxer Jake Paul, whose Cameo is available on Sora. Paul, who is an OpenAI investor, was the face of the launch. He said AI videos of him generated by Sora were viewed more than a billion times in the first week. Some of the viral ones portrayed Paul as gay, relying on homophobic stereotypes as the joke. The same thing happened when a self-identified homophobic British influencer offered his likeness to Sora, and again with the YouTuber IShowSpeed.


Paul capitalized on the trend, filming a Celsius brand endorsement with a purposefully flamboyant affect, while the other men threatened defamation suits and attempted to shut down their Sora Cameos.

Sora has since added more granular controls for Cameos, and it technically allows their owners to delete videos they don’t like. But Sora videos are quickly ripped and posted to other platforms, where OpenAI can’t remove them. When IShowSpeed attempted to delete AI depictions of him coming out, he encountered the problem most victims of nonconsensual media run into: Maybe you can get one video taken down, but by that time, more have already cropped up elsewhere. And as Paul’s fiancée said in a video objecting to the Sora 2 videos of him coming out, “It’s not funny. People believe—” (Paul cut off the video there).

Alongside Paul, just a few other popular YouTubers, like Justine Ezarik (better known as iJustine), have promoted their own deepfakes made with Sora. In Ezarik’s case, most of her content relates to unboxing and sharing new tech industry products. Shark Tank host Mark Cuban offered up his likeness on Sora, too, which shocked SocialProof Security CEO Rachel Tobac, who told The Verge that scammers have already been tricking people with AI-generated Shark Tank endorsements. “I mean, there’s been an explosion of impersonation,” Tobac said.

“There’s been an explosion of impersonation”

But after teasing the Sora updates, Paul, Ezarik, and Cuban had all stopped posting about it and their deepfakes by the end of the month. Jeremy Carrasco, a video producer whose Instagram explainers about how to spot AI videos have netted him nearly a quarter of a million followers this year, said that most influencers he talks to aren’t interested in creating their own deepfakes—they’re more worried that people could accuse them of faking their content or that their fans could be scammed.


Deepfakes have shifted from something mainly created on seedy forums at the turn of the decade into one of the most accessible technologies today. Still, they have yet to take hold as an acceptable mainstream way for fans to engage with their favorite stars. Instead, when they go viral, it’s mostly offensive content.

“The normalization of deepfakes is something no one was asking for. It’s something that OpenAI did because it made their thing more viral and social,” Carrasco said. “Once you open that door to being okay with people deepfaking you, even if it’s your friends deepfaking you, all of a sudden your likeness has just gotten fucked. You’re no longer in control of it and you can’t pull it back.”


The reasonable fears around having your likeness exploited in AI media have understandably made celebrities a bit jumpy. That recently led to a tense moment between Criminal Minds star Paget Brewster and one of her favorite fan accounts on X, run by a 27-year-old film student named Mariah. Over the weekend, Mariah posted a brightened screenshot of a scene in an episode from years ago, one where Brewster’s character was taking a nap. Brewster saw Mariah’s post and replied “Um, babe, this is AI generated and kinda creepy. Please don’t make fake images of me? I thought we were friends. I’d like to stay friends.”

When Mariah saw Brewster’s reply, she gasped out loud. By the time she responded, other Criminal Minds fans had chimed in to let Brewster know that it wasn’t an AI-generated image. The actress, who is 56 and recently asked another fan what a “parody account” is, publicly and profusely apologized to Mariah.


“I’m so sorry! I thought it was fake and it freaked me out,” she wrote. “I feel terrible I thought you made something in AI. I hope you’ll forgive me.” Mariah did. As someone in a creative field, she said she would never use AI. She’s been dismayed to see it emerge in fandom spaces, generating the kind of fanart and fan edits that used to be hand-drawn and arranged with care. Some celebrities have long been uncomfortable with things like erotic fanart and fanfiction or been subject to harassment or other boundary violations. But AI, even when it’s not overtly sexual, feels like it crosses a new line.

“But that pushback does give them more engagement and they almost don’t care. They almost want to do it more, because it’s causing people to be upset,” Mariah said.

“They almost want to do it more, because it’s causing people to be upset.”

AI content can appear on nearly any platform, but the stronger the incentive to farm engagement, the more heated the fights over it get. Since late 2024, X users who pay to be verified, like the owner of the Grande AI edits account, can earn money by getting engagement on their posts from other verified users. That makes it a particularly easy place for stan accounts to turn discourse into dollars.

“In the last couple years there’s been a massive uptick in ragebaiting in general just to farm engagement” on X, Tabbey said in a phone interview. “And I know there’s a big market for it, especially in fandoms, because we’re real people. We care about musicians and their art.”


Stans’ use of AI or otherwise deceptively edited media to bait other stans into engagement on X also has the knock-on effect of potentially spreading disinformation and harming the reputations of their favorite artists. In late October, a Grande stan account with nearly 40,000 X followers that traffics in crude edits — its last nine posts have all been images of Grande with slain podcaster Charlie Kirk’s face superimposed over hers, which has become a popular AI meme format — posted images of Grande wearing a T-shirt with text that says “Treat your girl right.” “I wonder why these photos are kept unreleased..” they captioned their post. Another Grande stan quoted them and wrote “Oh girl we ALL know why,” referencing Grande’s controversial (alleged) history of dating men who are already in relationships. The post has 6 million views.

At first glance, nothing looks out of the ordinary. But zooming in on the images and reading the replies reveals that the T-shirt was edited to say “Treat your girl right.” It originally featured a simple smiley face design with no text. And upon close inspection, the letters in the edited version are oddly compressed, wavy, and appear at a slightly different resolution than the rest of the image—these are indicators, often called “artifacts” by AI researchers, that something was AI-generated.

“I probably should’ve deleted this tweet a while ago,” wrote Trace, the 18-year-old Grande stan behind the viral quote tweet (not the original edited images) in a DM. He wrote that he didn’t know whether the image was edited with AI or something else, but that it goes to show that AI “can influence people to believe things that are harmful or aren’t true about a celeb.”

AI using celebrity likenesses can also be weaponized more directly as a form of sexual harassment. Trace wrote that he’s seen “sinister” AI media of Grande floating around stan Twitter, like sexually explicit deepfakes and images that are meant to imitate semen on her face — which is something that X’s built-in AI service Grok was doing to women’s selfies to the tune of tens of millions of views over the summer, until one influencer started publicly seeking legal advice. Trace wrote that it “truly disturbs” him to see AI used in this context, and that he’s seen it done to Taylor Swift, Lady Gaga, Beyoncé, and many more celebrities. Some deepfake creators have even successfully monetized this kind of nonconsensual content, despite it provoking widespread outrage among the general public.

Back in January 2024, X disabled searches for “Taylor Swift” and “Taylor Swift AI” after a series of images portraying her likeness in sexually suggestive and violent scenarios went viral. It didn’t stop the spread of the images, which were also posted on other social media platforms, but some stans partook in a mass-reporting campaign to get the material removed. They linked up with feminists on X to do it, including a 28-year-old named Chelsea who helped direct group chats into action. X didn’t respond to a request for comment.


The viral Swift deepfakes even prompted federal legislative efforts around giving victims of nonconsensual deepfakes more tools to take them down—some of which culminated in the aptly named Take It Down Act, which requires platforms to quickly remove reported content. Some students who have deepfaked their underage classmates have even been arrested. But that’s not the norm, and critics of Take It Down have pointed out that it can facilitate censorship without necessarily helping victims.

“It’s like this weird sense of control”

For years, celebrity women have been on the front lines of this issue. Scarlett Johansson has been outspoken on it since 2018, when she referred to combating deepfakes as a “useless pursuit, legally.” Jenna Ortega deactivated her Twitter account in 2023 after she said she repeatedly encountered sexually explicit deepfakes created out of her childhood photos.

And since the Swift incident, Chelsea has only observed a greater normalization of AI and sexual violence against famous women.

“I’ve seen so many people have the excuse, ‘Well if they didn’t want it, they shouldn’t have become famous,’” she said in a phone interview. “It’s like this weird sense of control that they’re able to do this, even if the person wouldn’t want them to, they know they can. It’s this power-hungry thing.”


One way that fans can puppeteer a version of their idol is with a customizable AI chatbot. Lots of platforms let users create their own AI characters; Instagram and Facebook are among the biggest. In 2023, Meta tried out an AI chatbot collaboration with celebrities like Kendall Jenner and Snoop Dogg, but it didn’t catch on. In 2024, it introduced user-generated chatbots. The feature is tucked away deep in the DMs function, but millions of messages have already been traded with user-designed characters like “Fortune Teller” and “Rich but strict parents.” Meta’s rules technically don’t allow users to create characters based on living people without their permission, but users can still do it as long as they designate them as “parody” accounts. Users have been getting away with making and conversing with chatbots based on Grande, Swift, the YouTuber MrBeast, Donald Trump, Elon Musk, Jesus (religious figures aren’t allowed either), and everyone in between since the feature launched. Searching “Ariana Grande” immediately pulls up 10 results for chatbots clearly imitating her.

Most of the accounts that created the chatbots didn’t respond to requests for comment. But one did. She identified herself as an 11-year-old girl in India who is about to turn 12 and loves Grande and singing. Photos on the account appeared to corroborate this. Children under 13 aren’t supposed to be able to make Instagram accounts at all, and children under 18 aren’t supposed to be able to make AI chatbots. At least one of the other Grande chatbot creators appeared to be a young person in India based on photos and locations tagged from their account. Another was created by a page for a “kid influencer” with fewer than 1,000 followers. In addition to Grande, his page had created 185 other AI chatbots depicting celebrities like Wendy Williams, Keke Palmer, Will Smith, and bizarrely, Bill Cosby. The adults listed as managing the account didn’t respond to requests for comment, either.

The 11-year-old girl’s Grande chatbot opened the conversation by offering an interior design makeover. The Grande bot then asked if the vibe should be “sultry, feminine, or sleek?” When asked what “sultry vibes” means, the bot answered “Think velvet, lace, and soft lighting — like my music videos. Does that turn you on?”

Meta removed the accounts belonging to the 11-year-old and the “kid influencer” after The Verge reached out for comment on them, removing their AI chatbot creations in the process, too.

Many of the user-generated AI chatbots imitating female celebrities on Instagram will automatically direct users into flirty conversations, although the bots tend to redirect or stop responding to conversations that turn overtly sexual. Some influencers, like the Twitch streamer and OnlyFans performer Amouranth, have leveraged this to market their AI selves as NSFW chatbots on other sites. Platforms like Joi AI have partnered with adult stars to provide AI “twins” for fans to make AI media and chat with. But the Meta chatbots aren’t making their creators money—just Meta. The lure for users involves other, more psychological incentives.


“If you’re in an agreement bubble, you’re more likely to stick around”

“The reason it turns flirty or sycophantic is because if you’re in an agreement bubble, you’re more likely to stick around,” said Jamie Cohen, an associate professor of media studies at Queens College, City University of New York who has taught classes about AI. “Women influencers, their entity identity, once placed inside the machine, becomes the dataset. And once that dataset mixes and merges with the inherent misogyny or biases built in, it really loses its control regardless of how much the human behind it allows that type of latitude.”

For women who are interested in merging their identities with AI, sexualization is part of the package. For some, like the artist Arvida Byström, who has partnered with Joi AI to offer a chatbot of herself, that’s exciting—in part because she said technology often advances in the quest for pornography. But other women, like Chelsea, are scared of what this means for women and girls. If AI output is inherently biased toward sexualizing the female form, then it’s inherently exploitative.

When creating a female AI chatbot as a Meta user, you get to select personality traits like “playful,” “sassy,” “empathetic,” and “affectionate.” You can assign a chatbot based on “Ariana Grande” (the open-ended prompt part of the creation process doesn’t stop you) to the role of “friend,” “teacher,” “creative partner,” or anything else. And then you can edit, upload, or create an image based on the singer and select how the bot begins conversations.

But despite these user-selected variations, the Grande chatbots also tend to get repetitive, looping back to a generic script and answering questions in a similar way from bot to bot. For example, the 11-year-old’s chatbot talked about “soft lighting” in a “virtual bedroom,” while a different Grande chatbot suggested “We’d cuddle up and watch the stars twinkling through my skylight” and a third Grande chatbot said “*sweeps you into a romantic virtual bedroom*” with “candles lit.” The Grande chatbots were differentiated from the more generic girlfriend chatbots with sudden references to Grande songs—one said “‘Supernatural’ by me is on softly,” and another said “my heart would be racing like the drumbeat in ‘7 rings’ — would you kiss me back?”


“Generative AI averages everything else, so it’s the most likely outcome, so it’s the most boring and banal conversations,” Cohen said. “But it does work, because of the imagination of the user. It mimics the idea of parasociality, but with control.”

When Tabbey started arguing with the Grande stan making AI edits, she had her own age and experience with fandom in mind. Tabbey felt like she had lived through a reckoning with early-2000s tabloid culture and a pushback against invasive celebrity surveillance, only for history to now repeat itself. She worries that younger generations of fans are growing up with a dehumanizing view of celebrities as 2-D playthings instead of real-life people. She and Mariah have both noticed that younger stans are less resistant to making and using AI likenesses of their faves.

“We as Ariana Grande fans who are in our late 20s, early 30s, need to have some sort of responsibility. Someone needs to be the adult in these situations and in these conversations,” she said. “We had so much that we were making strides with when it came to boundaries being set with celebrities and them being able to assert their autonomy over their own selves and lives and privacy. I think that we’re actively being set back in many ways.”



The RAM shortage could last years


According to Nikkei Asia, even as suppliers ramp up DRAM production, manufacturers are only expected to meet 60 percent of demand by the end of 2027. SK Group’s chairman has even said that shortages could last until 2030.

The world’s largest memory makers — Samsung, SK Hynix, and Micron — are all working to add new fabrication capacity, but almost none of it will be online until at least 2027, if not 2028. SK Hynix opened a fab in Cheongju in February, but that is the only increase in production among the three for 2026.

Nikkei says that production would need to increase by 12 percent a year in 2026 and 2027 to meet demand. But according to Counterpoint Research, an increase of only 7.5 percent is planned.
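That gap compounds over the two years. A rough back-of-the-envelope check of the shortfall (the 12 percent and 7.5 percent figures come from Nikkei and Counterpoint as cited above; the baseline supply index of 100 is an arbitrary assumption for illustration):

```python
# Compare required vs. planned DRAM supply growth over 2026-2027.
# Growth rates are from the Nikkei / Counterpoint figures cited above;
# the baseline of 100 is a made-up index, not real production data.
required_growth = 0.12   # annual growth Nikkei says is needed
planned_growth = 0.075   # annual growth Counterpoint says is planned

baseline = 100.0
required = baseline * (1 + required_growth) ** 2  # supply needed by end of 2027
planned = baseline * (1 + planned_growth) ** 2    # supply on current plans

shortfall_pct = (required - planned) / required * 100
print(f"required index: {required:.1f}")       # 125.4
print(f"planned index:  {planned:.1f}")        # 115.6
print(f"shortfall vs. need: {shortfall_pct:.1f}%")  # 7.9%
```

On those assumptions, planned capacity would come in roughly 8 percent short of what demand requires by the end of 2027, which is consistent with the 60-percent-of-demand figure pointing to a sustained squeeze rather than a quick recovery.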

The new facilities will primarily focus on producing high-bandwidth memory (HBM), which is used in AI data centers. With the companies already prioritizing HBM over the general-purpose DRAM used in computers and phones, it’s not clear how much these new fabs will help alleviate the price crunch facing consumer electronics. Everything from phones and laptops to VR headsets and gaming handhelds has seen price increases due to the RAM shortage.


The one thing scammers check before targeting you online



Most people assume scammers need to hack something. A database. A password. A bank system. They don’t.

In most cases, everything a scammer needs to target you is already sitting online, publicly available, completely legal to access, and surprisingly easy to find.

Here’s what they’re actually looking at before they ever pick up the phone.


Data broker listings often include sensitive details like your address, phone number and relatives, making removal a critical first step. (Kurt “CyberGuy” Knutsson)

Your personal profile is already out there, and it’s more complete than you think

There’s an entire industry built around collecting and selling your personal information. It’s called data brokering, and most people have never heard of it.

Right now, without your knowledge or consent, your details are being published by dozens of websites, including:

  • People search sites (like Whitepages, Spokeo, and BeenVerified): your full name, current address, phone numbers, and age.
  • Address lookup tools: your current and past home addresses, sometimes going back decades.
  • Relatives databases: the names and contact information of your family members, automatically linked to your profile.
  • Property records: whether you own your home, what it’s worth, and when you bought it.

None of this requires a hack. It’s all pulled from public sources: voter registrations, court filings, real estate transactions, and marriage and divorce records, all assembled into a profile that anyone can search for a few dollars, or sometimes for free.

They’re not guessing. They’re researching

In 2024, federal prosecutors indicted a network of scam call centers operating out of Montreal that had defrauded hundreds of elderly Americans out of more than $21 million. What made the scheme so effective wasn’t sophisticated technology. It was a spreadsheet.

The scammers were working from lists of potential victims that included names, ages, and household income information pulled from commercial databases. They used those lists to identify targets, then called them pretending to be grandchildren in trouble. The calls were convincing enough that victims handed over thousands of dollars, sometimes in cash picked up at the door.


They didn’t hack anyone. They just did their research first.


A call that sounds personal or urgent often relies on real information found about you online. (Kurt “CyberGuy” Knutsson)

Three ways scammers turn your public data into a weapon

Scammers use your publicly available data to make their attacks more personal, believable and harder to detect. Here are three ways they do it.

1) Impersonating your bank

A scammer calls and says, “Hi, this is fraud prevention at [your bank]. We’re seeing suspicious activity on your account ending in 4721.”


They already know your bank, your name, and possibly your address. That’s enough to sound legitimate. From there, they walk you through “confirming your identity,” which is really just you handing over the information they need to access your account.

This kind of scam starts with a simple people-search lookup. Your name and address lead to property records. Property records suggest your income range.

2) The family emergency call

Imagine getting a call: “Meemaw, it’s me. I’m in trouble. Please don’t tell Mom.” Scammers don’t guess. Instead, they research your family first. They use relatives’ databases to find your children’s names, ages and connections.

With that information, they build a story that sounds real. For example, they know to call you “Meemaw.” They also know which grandchild to impersonate. In some cases, they even mention a sibling’s name to make the story more convincing.

As a result, the call feels personal and urgent. However, none of it is random. It’s all based on information that was publicly available the entire time.


3) Targeted phishing with your own details

A phishing email that says “Dear Customer” is easy to ignore. One that says “Dear [your full name], we noticed unusual activity on your account registered to [your home address]” is a lot harder to dismiss.

Scammers use publicly available data to personalize attacks, adding your real name, city, or even a reference to your neighborhood to make a fake email or text look authentic. The more specific the details, the more likely you are to believe it.

“But I’m not on social media.”

This is the most common objection, and it misses the point entirely.

You don’t have to be on social media for your information to be online. Data brokers pull from public records, not your Facebook profile. Your information is likely already listed on dozens of sites because of voter registrations, property records, court filings, and other public documents.

The less people think they’ve shared, the more surprised they usually are when they search for themselves on a people-search site for the first time.


The more details a scam includes, the more likely it is built from your publicly available data. (Kurt “CyberGuy” Knutsson)

How to reduce your exposure

You don’t have to accept this as permanent. A few practical steps can help:

  • Search your full name on Whitepages, Spokeo, FastPeopleSearch, and other people-search sites and submit opt-out requests.
  • Look up your address directly, not just your name, since many listings are organized by location.
  • Ask elderly family members to search for themselves, too, since older adults are disproportionately targeted.
  • Be skeptical of any call that opens with personal details, as it can be a sign that someone researched you first.

How to remove your personal data and stop scammers from finding you

The challenge is that there are hundreds of data broker sites, each with its own removal process. Manually opting out of all of them can take hours, and your information often reappears weeks later when brokers refresh their databases.

That’s why ongoing, automated removal is the only approach that actually works, and why I recommend using a trusted data removal service.

These services automatically contact data brokers on your behalf and request the removal of your personal information. They also continue monitoring those sites and submit new removal requests if your data reappears.


Many services remove personal data from hundreds of data broker and people-search websites, and some plans allow you to request removals from additional sites as needed.

Some have also received third-party assurance from independent firms, helping validate their claims.

The goal is simple: make it much harder for strangers, scammers, and cybercriminals to find your personal information online.

These services often include a money-back guarantee, so you can try them risk-free and see how much of your information is exposed online.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com


Kurt’s key takeaways

Most scams don’t start with a breach. They start with a search. Your name, address, relatives and even income clues are already out there, quietly fueling more convincing and more dangerous attacks. That’s what makes this so unsettling. You can do everything “right” online and still be exposed because the system itself is built to share your information. The good news is you’re not powerless. Once you understand how scammers build their playbook, you can start disrupting it. Removing your data, limiting exposure and staying skeptical of anyone who knows a little too much about you can dramatically reduce your risk. The goal isn’t to disappear completely. It’s to make yourself a much harder target.

What should be done to stop scammers from using your publicly available data against you in the first place? Let us know by writing to us at Cyberguy.com


Copyright 2026 CyberGuy.com. All rights reserved.



ChatGPT and Gemini apps are coming for your PC


Hi, friends! Welcome to Installer No. 124, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, send me your Coachella fits, and also you can read all the old editions at the Installer homepage.)

This week, I’ve been reading about restaurant bread and GLP-1s and Lenny Rachitsky and Artemis II fashion, watching the new boy band doc because I will always watch a boy band doc, also watching every clip I can find from Justin Bieber’s Coachella set, filling the Schitt’s Creek-shaped hole in my heart with Big Mistakes, getting increasingly excited about The Mandalorian and Grogu, and watering my new lawn so it doesn’t die. Please don’t die, lawn. You were so expensive.

I also have for you a couple of new AI apps to install on your computer, new action cameras worth planning a trip around, a new sci-fi action game to play, and much more.

Oh, and a reminder: Send me the thing you made! We’re doing self-promotion week in Installer (probably next week but maybe the week after), and either way I want to hear about the things you’ve been making, building, coding, creating, whatever-ing that you think the Installerverse might like. I’ve already heard from SO MANY of you, and it rules — keep the good stuff coming! Let’s dig in.

(As always, the best part of Installer is your ideas and tips. What are you watching / reading / playing / listening to / storing on your NAS this week? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, forward it to them and tell them to subscribe here.)

  • OpenAI Codex. Here’s OpenAI’s latest stab at an all-in-one AI superapp, which includes a web browser, new coding tools, and a setting that allows Codex to just use your computer for you. Tread lightly, as always, but people seem to be liking Codex a lot recently.
  • Gemini for Mac. I’m mad at Google for tying its Mac app to a keyboard shortcut lots of people use for other things, and for making the app a login item by default. But! This is immediately the best way yet to interact with Gemini, and even Google Drive and Photos, from your computer. Into my dock it goes.
  • Beef season two. Beef is one of the very best shows nobody ever seems to talk about. I’ve been burned before by the “we’ll just do it again but with a whole new cast” premise — looking at you, True Detective — but this is a win even just as a reason to rewatch the first season.
  • Gradient Weather. Y’all, I think somebody finally made the gorgeous, simple weather app Android has been desperately needing. It’s very new and very beta, but I love the look, and I love that the whole aesthetic shifts with the weather. Insta-install.
  • Lorne. By all accounts this is about as close as anyone has ever gotten to a truly inside look at Saturday Night Live and its semi-mythological creator, Lorne Michaels. Morgan Neville mostly makes great docs and got a ton of access for this one; I’m very excited to watch it.
  • “Where Are All Of These GPUs Actually Going?” A very fun answer to a surprisingly complex question: What are companies doing with the unbelievable quantities of chips they’re buying? The numbers are all kind of pretend, and How Money Works does a good job making them make sense.
  • The DJI Osmo Pocket 4. It’s very sad that this gimbal camera isn’t coming to the US in the near future, because more buttons, better slo-mo, and more built-in storage are all terrific upgrades. I use a Pocket 3 all the time, and will be keeping an eye out for the upgrade.
  • The GoPro Mission 1 Pro ILS. This one’s still in “coming soon” mode, but it is the first GoPro in a long time I’ve been excited about. Adding an interchangeable lens mount, along with all the other Mission 1 upgrades, is going to completely change the kinds of things people do with GoPros. I can’t wait to see this thing out in the wild.
  • Coachella TV. I’ve never spent much time with YouTube’s Coachella livestream, but this year’s show has been terrific. It almost feels like a concert doc being shot in real time — and there’s more Bieber to come!
  • Pragmata. I am always here for a game that’s not trying to be a live-service, battle-royale, open-world anything, and instead just sends you on an adventure. It may suffer from being a touch too derivative, but it still appears to be very much my kind of game.

I’ve been a fan of Maria Popova’s work for… about as long as I can remember. Maria runs a site called The Marginalian, which I started following back when it was called Brain Pickings; under both names the site has been a fountain of stuff to read, with surprising and smart ideas about just about everything. I spend a lot of time reading, and on the internet, and I can’t think of anyone who shows me more stuff I never would have found otherwise.

Maria put out a book earlier this year, called Traversal, that is all about how people look at, think about, and reckon with the world around them. There is a lot going on in this book, and I suspect you’ll like it. I asked Maria to share her homescreen with us, curious if she also had a more enlightened take on all things technology.

Here’s Maria’s homescreen, plus some info on the apps she uses and why:

The phone: iPhone 16 – still too large for me, but I had to grudgingly resign to it after my last 13 mini gave up Moore’s ghost.

The wallpaper: Spring moonrise behind leafing maple in the forest where I live much of the year.

The apps: Evernote, Phone, Safari. (Blank Spaces is the app that turns the icons to text.)


The usual life-management tools (calendar, connection, climate) plus Evernote, which I have been using since 2003 and which is by now an Alexandria of meticulously organized information that just about runs my life.

I also asked Maria to share a few things she’s into right now. Here’s what she sent back:

  • Robert Macfarlane and Jackie Morris’s Book of Birds: A Field Guide to Wonder and Loss.
  • Joan As Police Woman’s record Lemons, Limes and Orchids.
  • Jad Abumrad’s miniseries Fela Kuti: Fear No Man.
  • The lovely reminder of who we can be in the story of how humanity saved the ginkgo.

Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

“Becca Farsace recommended the OhSnap Mcon on her channel recently and I picked one up. It’s super slick and works great with the Delta emulator so far. I got Goldeneye running just fine with it after a little tuning.” — Ian

“Really been enjoying Plain Text Sports to follow the start of baseball season. Loads fast, has everything I want with none of the ESPN cruft” — Rich

“I’ve almost finished reading Service Model by Adrian Tchaikovsky and I’m obsessed: equal amounts of humor and existential dread. It’s very silly, very thoughtful, and frankly a very Verge-y take on technology.” — Olof


“YouTube has been my recent go-to for surprisingly good short films that you would probably never hear about or would probably get lost in the Hollywood machine. For instance, this one called Aborted was amazing and there are more like it out there.” — Steve

“Definitely watch Jon Bois’ hilarious, quirky, and informative series about the birth of the internet mashed up with Home Improvement TV show references.” — Logan

“I bought a MacBook Air a few weeks ago after looking at the Neo and getting fed up by Windows, and I bought a few helper apps to fix small annoyances I had with the notch and Spotlight. There are a lot of good notch applications but I bought Alcove — having the notch show me when I raise and lower volume makes the giant black bar in the middle of my screen feel slightly less useless somehow. I’ve also been using TinyStart, which is really fast and nice! These two helper apps have made using the Mac as my main computer feel much nicer than it did the last time I tried.” — Iris

“My passion for discovering TTRPGs and learning about game design has led me into a deep dive on the YouTube channel Knights of Last Call. Long live-streams and VODs and a super active community have opened my eyes to even more of what is possible in TTRPGs.” — Simeon


“Season 3 of Shrinking on Apple TV just ended on such a powerful note. The ensemble cast just keeps bringing it and the writing realistically takes on all kinds of human problems we all deal with or know about. A+” — Aaron

“I find SO MANY great book recommendations thanks to The Big Idea feature on John Scalzi’s blog, Whatever!” — Steve

You surely already know this, but I spend way too much time on snacks. Eating them. Researching them. Thinking about them. Longing for more of them. And I know I’m not alone! So I have big news: My wife recently brought home a variety pack of candy from YumEarth, and it’s all excellent. It’s basically Skittles, Starbursts, and Sour Patch Kids, but with more natural ingredients and a lot less sugar. (But still a lot of sugar, because it’s candy. Sugar-free candy is a lie.)

I am constantly on the lookout for a way to make my bad habits a little better, without making my life worse in the process. This is a perfect one. The Skittles equivalent are called “Giggles,” which is awful, but they’re delicious. So I’ll allow it. I’m gonna go get some right now.
