
Technology

Fandoms are cashing in on AI deepfakes


Madison Lawrence Tabbey was scrolling through X in late October when a post from a Wicked update account caught her attention. Ariana Grande, who stars in the movies as Glinda, had just liked a meme on Instagram about never wanting to see another AI-generated image again. Grande had also purportedly blocked a fan account that had made AI edits of her.

As Tabbey read through the mostly sympathetic replies, a very different message caught her eye. It was from a fellow Grande fan whose profile was mostly AI edits, showing Grande with different hairstyles and outfits. And, their reply said, they weren’t going to stop. Tabbey, a 33-year-old living in Nashville, Tennessee, couldn’t help but start arguing with them. “Oh so you were SERIOUS when you said you don’t care about poor communities not having water so that you can make AI pictures of ariana grande?” she shot back, referencing data centers draining resources and polluting cities like nearby Memphis. The account fired back at first, but amid a swarm of angry responses, it deactivated a few days later. It seemed like the owner wanted to argue and make people mad, but they might have taken things too far.

Grande is one of many celebrities and influencers who have openly rejected AI media exploiting their likenesses, but who continue to be prominently featured in it anyway, even among people who call themselves fans. As AI images and videos become ever simpler to produce, celebrities are facing down a mix of unsettled social norms and the incentives of an internet attention economy. And on “stan Twitter,” where pop culture accounts have grown into a lucrative fan-made media ecosystem, AI content has emerged as a growing genre, despite — or maybe because of — the outrage it provokes.

“Stan Twitter is very against AI just in general. So this goes against what people believe in, so then they’ll instantly get a comment, they’ll have the AI people retweet it, like it. So it’s just a very quick way to get money,” said Brandon, a 25-year-old who runs a verified fan account for Grande with close to 25,000 followers.

Brandon spoke on the condition that his account name and his last name be withheld, fearing retaliation from other people on stan Twitter. (Grande’s fans have been known to harass people; in 2019 the pop star told one critic under siege that she apologized on her fans’ behalf, but couldn’t stop them.) He tells The Verge he’s against most AI media, but he did ask ChatGPT to rank Grande’s top 10 songs that weren’t released as singles. He compiled the results into a thread that got over 1,000 likes. That seemed morally okay to him, as opposed to making AI pictures of Grande — commonly known as deepfakes — or Grande-inspired AI songs.


Grande’s position on the latter is clear. In a February 2024 interview, she called it “terrifying” that people were posting AI-generated imitations of her covering songs by other artists like Sabrina Carpenter and Dua Lipa. The rebuke hasn’t stopped them, though. Searching “ariana grande ai cover” on X still pulls up plenty of AI songs, although some have been removed by X in response to reports made by the original songs’ copyright owners.

Even the musician Grimes, who in 2023 encouraged fans to create AI songs based on her voice, said in October that the experience of having her likeness co-opted by AI “felt really weird and really uncomfortable.” She’s now calling for “international treaties” to regulate deepfakes.

“It’s just a very quick way to get money”

Grimes’ more recent comments follow the launch of an app that dramatically escalated AI media proliferation: OpenAI’s Sora video generator. Sora is built around a feature called “Cameos,” which lets anyone offer up their likeness for other users to play with. Many of the results were predictably offensive, and once they’re online, they’re nearly impossible to remove.

Grimes was reacting to videos of influencer and boxer Jake Paul, whose Cameo is available on Sora. Paul, who is an OpenAI investor, was the face of the launch. He said AI videos of him generated by Sora were viewed more than a billion times in the first week. Some of the viral ones portrayed Paul as gay, relying on homophobic stereotypes as the joke. The same thing happened when a self-identified homophobic British influencer offered his likeness to Sora, and again when the YouTuber IShowSpeed did.


Paul capitalized on the trend, filming a Celsius brand endorsement with a purposefully flamboyant affect, while the other men threatened defamation suits and attempted to shut down their Sora Cameos.

Sora has since added more granular controls for Cameos, and it technically allows their owners to delete videos they don’t like. But Sora videos are quickly ripped and posted to other platforms, where OpenAI can’t remove them. When IShowSpeed attempted to delete AI depictions of him coming out, he encountered the problem most victims of nonconsensual media run into: Maybe you can get one video taken down, but by that time, more have already cropped up elsewhere. And as Paul’s fiancée said in a video objecting to the Sora 2 videos of him coming out, “It’s not funny. People believe—” (Paul cut off the video there).

Alongside Paul, just a few other popular YouTubers, like Justine Ezarik (better known as iJustine), have promoted their own deepfakes made with Sora. In Ezarik’s case, most of her content relates to unboxing and sharing new tech industry products. Shark Tank host Mark Cuban offered up his likeness on Sora, too, which shocked SocialProof Security CEO Rachel Tobac, who told The Verge that scammers have already been tricking people with AI-generated Shark Tank endorsements. “I mean, there’s been an explosion of impersonation,” Tobac said.

“There’s been an explosion of impersonation”

But after teasing the Sora updates, Paul, Ezarik, and Cuban had all stopped posting about it and their deepfakes by the end of the month. Jeremy Carrasco, a video producer whose Instagram explainers about how to spot AI videos have netted him nearly a quarter of a million followers this year, said that most influencers he talks to aren’t interested in creating their own deepfakes—they’re more worried that people could accuse them of faking their content or that their fans could be scammed.


Deepfakes have shifted from something mainly created on seedy forums at the turn of the decade into one of the most accessible technologies today. Still, they have yet to take hold as an acceptable mainstream way for fans to engage with their favorite stars. Instead, when they go viral, it’s mostly offensive content.

“The normalization of deepfakes is something no one was asking for. It’s something that OpenAI did because it made their thing more viral and social,” Carrasco said. “Once you open that door to being okay with people deepfaking you, even if it’s your friends deepfaking you, all of a sudden your likeness has just gotten fucked. You’re no longer in control of it and you can’t pull it back.”

Image: Cath Virginia / The Verge, Getty Images

The reasonable fears around having your likeness exploited in AI media have understandably made celebrities a bit jumpy. That recently led to a tense moment between Criminal Minds star Paget Brewster and one of her favorite fan accounts on X, run by a 27-year-old film student named Mariah. Over the weekend, Mariah posted a brightened screenshot of a scene in an episode from years ago, one where Brewster’s character was taking a nap. Brewster saw Mariah’s post and replied “Um, babe, this is AI generated and kinda creepy. Please don’t make fake images of me? I thought we were friends. I’d like to stay friends.”

When Mariah saw Brewster’s reply, she gasped out loud. By the time she responded, other Criminal Minds fans had chimed in to let Brewster know that it wasn’t an AI-generated image. The actress, who is 56 and recently asked another fan what a “parody account” is, publicly and profusely apologized to Mariah.


“I’m so sorry! I thought it was fake and it freaked me out,” she wrote. “I feel terrible I thought you made something in AI. I hope you’ll forgive me.” Mariah did. As someone in a creative field, she said she would never use AI. She’s been dismayed to see it emerge in fandom spaces, generating the kind of fanart and fan edits that used to be hand-drawn and arranged with care. Some celebrities have long been uncomfortable with things like erotic fanart and fanfiction or been subject to harassment or other boundary violations. But AI, even when it’s not overtly sexual, feels like it crosses a new line.

“But that pushback does give them more engagement and they almost don’t care. They almost want to do it more, because it’s causing people to be upset,” Mariah said.

“They almost want to do it more, because it’s causing people to be upset.”

AI content can appear on nearly any platform, but the stronger the incentive to farm engagement, the more heated the fights over it get. Since late 2024, X users who pay to be verified, like the owner of the Grande AI edits account, can earn money by getting engagement on their posts from other verified users. That makes it a particularly easy place for stan accounts to turn discourse into dollars.

“In the last couple years there’s been a massive uptick in ragebaiting in general just to farm engagement” on X, Tabbey said in a phone interview. “And I know there’s a big market for it, especially in fandoms, because we’re real people. We care about musicians and their art.”


Stans using AI or otherwise deceptively edited media to bait other stans into engagement on X also has the knock-on effect of potentially spreading disinformation and harming the reputations of their favorite artists. In late October, a Grande stan account with nearly 40,000 X followers that traffics in crude edits — their last nine posts have all been images of Grande with slain podcaster Charlie Kirk’s face superimposed over hers, which has become a popular AI meme format — posted images of Grande wearing a T-shirt with text that says “Treat your girl right.” “I wonder why these photos are kept unreleased..” they captioned their post. Another Grande stan quoted them and wrote “Oh girl we ALL know why,” referencing Grande’s controversial (alleged) history of dating men who are already in relationships. The post has 6 million views.

At first glance, nothing looks out of the ordinary. But zooming in on the images and reading the replies reveals that the T-shirt was edited to say “Treat your girl right.” It originally featured a simple smiley face design with no text. And upon close inspection, the letters in the edited version are oddly compressed, wavy, and appear at a slightly different resolution than the rest of the image—these are indicators, often called “artifacts” by AI researchers, that something was AI-generated.
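The intuition the replies relied on can be made concrete: an edited or generated region usually has local statistics (noise, sharpness, compression) that don't match the rest of the image. As a toy sketch in Python — not how real forensic tools work, and with the image, block size, and threshold all invented for illustration — a pasted-in patch can be surfaced by comparing each block's variance against the image's typical block:

```python
# Toy artifact spotter: model a grayscale image as a 2D list of pixel values,
# compute the variance of every 4x4 block, and flag blocks whose variance is
# wildly out of line with the median block. Edited regions often stand out
# this way because their noise/texture statistics don't match the original.
import random
import statistics

def block_variances(img, bs=4):
    h, w = len(img), len(img[0])
    out = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            vals = [img[y][x] for y in range(by, by + bs) for x in range(bx, bx + bs)]
            out[(by, bx)] = statistics.pvariance(vals)
    return out

def flag_anomalies(variances, factor=5.0):
    # A block is "suspicious" if its variance exceeds factor * median variance.
    med = statistics.median(variances.values())
    return {pos for pos, v in variances.items() if v > factor * max(med, 1e-9)}

random.seed(0)
# A 16x16 "photo" with gentle, uniform sensor noise...
img = [[128 + random.gauss(0, 2) for _ in range(16)] for _ in range(16)]
# ...plus one pasted-in 4x4 patch of harsh black/white texture (the "edit").
for y in range(4, 8):
    for x in range(8, 12):
        img[y][x] = random.choice([0, 255])

suspicious = flag_anomalies(block_variances(img))
print(suspicious)  # flags the block containing the pasted patch
```

Real detectors look at far subtler cues (JPEG quantization grids, demosaicing patterns, lighting consistency), but the principle is the same: a region that was made differently measures differently.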

“I probably should’ve deleted this tweet a while ago,” wrote Trace, the 18-year-old Grande stan behind the viral quote tweet (not the original edited images) in a DM. He wrote that he didn’t know whether the image was edited with AI or something else, but that it goes to show that AI “can influence people to believe things that are harmful or aren’t true about a celeb.”

AI using celebrity likenesses can also be weaponized more directly as a form of sexual harassment. Trace wrote that he’s seen “sinister” AI media of Grande floating around stan Twitter, like sexually explicit deepfakes and images that are meant to imitate semen on her face — which is something that X’s built-in AI service Grok was doing to women’s selfies to the tune of tens of millions of views over the summer, until one influencer started publicly seeking legal advice. Trace wrote that it “truly disturbs” him to see AI used in this context, and that he’s seen it done to Taylor Swift, Lady Gaga, Beyoncé, and many more celebrities. Some deepfake creators have even successfully monetized this kind of nonconsensual content, despite it provoking widespread outrage among the general public.

Back in January 2024, X disabled searches for “Taylor Swift” and “Taylor Swift AI” after a series of images portraying her likeness in sexually suggestive and violent scenarios went viral. It didn’t stop the spread of the images, which were also posted on other social media platforms, but some stans partook in a mass-reporting campaign to get the material removed. They linked up with feminists on X to do it, including a 28-year-old named Chelsea who helped direct group chats into action. X didn’t respond to a request for comment.


The viral Swift deepfakes even prompted federal legislative efforts around giving victims of nonconsensual deepfakes more tools to take them down—some of which culminated in the aptly named Take It Down Act, which requires platforms to quickly remove reported content. Some students who have deepfaked their underage classmates have even been arrested. But that’s not the norm, and critics of Take It Down have pointed out that it can facilitate censorship without necessarily helping victims.

“It’s like this weird sense of control”

For years, celebrity women have been on the front lines of this issue. Scarlett Johansson has been outspoken on it since 2018, when she referred to combating deepfakes as a “useless pursuit, legally.” Jenna Ortega deactivated her Twitter account in 2023 after she said she repeatedly encountered sexually explicit deepfakes created out of her childhood photos.

And since the Swift incident, Chelsea has only observed a greater normalization of AI and sexual violence against famous women.

“I’ve seen so many people have the excuse, ‘Well if they didn’t want it, they shouldn’t have become famous,’” she said in a phone interview. “It’s like this weird sense of control that they’re able to do this, even if the person wouldn’t want them to, they know they can. It’s this power-hungry thing.”

An image of Taylor Swift, copied and pasted with green triangles across it.

Image: Cath Virginia / The Verge, Getty Images

One way that fans can puppeteer a version of their idol is with a customizable AI chatbot. Lots of platforms provide the ability to create your own AI character, some of the biggest being Instagram and Facebook. In 2023, Meta tried out an AI chatbot collaboration with celebrities like Kendall Jenner and Snoop Dogg, but it didn’t catch on. In 2024, it introduced user-generated chatbots. The feature is tucked away deep in the DMs function, but millions of messages have already been traded with user-designed characters like “Fortune Teller” and “Rich but strict parents.” Meta’s rules technically don’t allow users to create characters based on living people without their permission, but users can still do it as long as they designate them as “parody” accounts. Users have been getting away with making and conversing with chatbots based on Grande, Swift, the YouTuber MrBeast, Donald Trump, Elon Musk, Jesus (religious figures aren’t allowed either), and everyone in between since the beginning. Searching “Ariana Grande” pulls up 10 results for chatbots clearly imitating her right away.

Most of the accounts that created the chatbots didn’t respond to requests for comment. But one did. She identified herself as an 11-year-old girl in India who is about to turn 12 and loves Grande and singing. Photos on the account appeared to corroborate this. Children under 13 aren’t supposed to be able to make Instagram accounts at all, and children under 18 aren’t supposed to be able to make AI chatbots. At least one of the other Grande chatbot creators appeared to be a young person in India based on photos and locations tagged from their account. Another was created by a page for a “kid influencer” with fewer than 1,000 followers. In addition to Grande, his page had created 185 other AI chatbots depicting celebrities like Wendy Williams, Keke Palmer, Will Smith, and bizarrely, Bill Cosby. The adults listed as managing the account didn’t respond to requests for comment, either.

The 11-year-old girl’s Grande chatbot opened the conversation by offering an interior design makeover. The Grande bot then asked if the vibe should be “sultry, feminine, or sleek?” When asked what “sultry vibes” means, the bot answered “Think velvet, lace, and soft lighting — like my music videos. Does that turn you on?”

Meta removed the accounts belonging to the 11-year-old and the “kid influencer” after The Verge reached out for comment on them, removing their AI chatbot creations in the process, too.

Many of the user-generated AI chatbots imitating female celebrities on Instagram will automatically direct users into flirty conversations, although the bots tend to redirect or stop responding to conversations that turn overtly sexual. Some influencers, like the Twitch streamer and OnlyFans performer Amouranth, have leveraged this to market their AI selves as NSFW chatbots on other sites. Platforms like Joi AI have partnered with adult stars to provide AI “twins” for fans to make AI media and chat with. But the Meta chatbots aren’t making their creators money—just Meta. The lure for users involves other, more psychological incentives.


“If you’re in an agreement bubble, you’re more likely to stick around”

“The reason it turns flirty or sycophantic is because if you’re in an agreement bubble, you’re more likely to stick around,” said Jamie Cohen, an associate professor of media studies at Queens College, City University of New York who has taught classes about AI. “Women influencers, their entity identity, once placed inside the machine, becomes the dataset. And once that dataset mixes and merges with the inherent misogyny or biases built in, it really loses its control regardless of how much the human behind it allows that type of latitude.”

For women who are interested in merging their identities with AI, sexualization is part of the package. For some, like the artist Arvida Byström, who has partnered with Joi AI to offer a chatbot of herself, that’s exciting—in part because she said technology often advances in the quest for pornography. But other women, like Chelsea, are scared of what this means for women and girls. If AI output is inherently biased toward sexualizing the female form, then it’s inherently exploitative.

When creating a female AI chatbot as a Meta user, you get to select personality traits like “playful,” “sassy,” “empathetic,” and “affectionate.” You can assign a chatbot based on “Ariana Grande” (the open-ended prompt part of the creation process doesn’t stop you) to the role of “friend,” “teacher,” “creative partner,” or anything else. And then you can edit, upload, or create an image based on the singer and select how the bot begins conversations.

But despite these user-selected variations, the Grande chatbots also tend to get repetitive, looping back to a generic script and answering questions in a similar way from bot to bot. For example, the 11-year-old’s chatbot talked about “soft lighting” in a “virtual bedroom,” while a different Grande chatbot suggested “We’d cuddle up and watch the stars twinkling through my skylight” and a third Grande chatbot said “*sweeps you into a romantic virtual bedroom*” with “candles lit.” The Grande chatbots were differentiated from the more generic girlfriend chatbots with sudden references to Grande songs—one said “‘Supernatural’ by me is on softly,” and another said “my heart would be racing like the drumbeat in ‘7 rings’ — would you kiss me back?”


“Generative AI averages everything else, so it’s the most likely outcome, so it’s the most boring and banal conversations,” Cohen said. “But it does work, because of the imagination of the user. It mimics the idea of parasociality, but with control.”

When Tabbey started arguing with the Grande stan making AI edits, she had her own age and experience with fandom in mind. Tabbey felt like she had lived through a reckoning with early 2000s tabloid culture and a pushback against invasive celebrity surveillance, only to watch history repeat itself. She worries that younger generations of fans are growing up with a dehumanizing view of celebrities as 2-D playthings instead of real-life people. She and Mariah have both noticed that younger stans are less resistant to making and using AI likenesses of their faves.

“We as Ariana Grande fans who are in our late 20s, early 30s, need to have some sort of responsibility. Someone needs to be the adult in these situations and in these conversations,” she said. “We had so much that we were making strides with when it came to boundaries being set with celebrities and them being able to assert their autonomy over their own selves and lives and privacy. I think that we’re actively being set back in many ways.”



Technology

How Last Samurai Standing adds kinetic action to the Battle Royale formula


Last Samurai Standing begins with a familiar premise. Desperate samurai dispossessed by the restoration of the emperor enter into a deadly game for a life-changing cash prize — all for the entertainment of anonymous elites. Unlike its inspirations Battle Royale and Squid Game, however, Last Samurai Standing’s violence is chaotic, fast-paced, and kinetic, though it hides a careful choreography that makes the series a more electric proposition than its predecessors.

Viewers have Junichi Okada to thank for that. As well as starring in and producing Last Samurai Standing, he serves as the series’ action planner. Many will be familiar with the results of an action planner’s work — sometimes called an action director, elsewhere a “coordinator,” and even “choreographer” — though perhaps not what the role entails. In the case of Last Samurai Standing, it’s a role that touches on nearly every aspect of the production, from the story to the action itself.

“I was involved from the script stage, thinking about what kind of action we wanted and how we would present it in the context of this story,” Okada tells The Verge. “If the director [Michihito Fujii] said, ‘I want to shoot this kind of battle scene,’ I would then think through the content and concept, design the scene, and ultimately translate that into script pages.”

That close relationship with the writer and director extends to other departments, too. Though an action planner’s role starts with managing fight scenes and stunt performers, they also liaise with camera, wardrobe, makeup, and even editorial departments to ensure fight scenes cohere with the rest of the production.

Image: Netflix


It’s a role that might seem a natural progression for Okada, who is certified to teach Kali and Jeet Kune Do — a martial art conceived by Bruce Lee — and holds multiple black belts in jiujitsu. But the roots of his path into action planning can be traced back further, to 1995, when he became the youngest member of the J-pop group V6.

“Dance experience connects directly to creating action,” he says. “[In both] rhythm and control of the body are extremely important.” Having joined V6 at 15, Okada learned early how he moves in relation to a camera during choreography, how he is seen within the structure of a shot, and, critically for action planning, how to navigate all of that safely.

That J-pop stardom also offered avenues into acting, initially in roles you might expect for a young pop star: comic heartthrobs and sitcom sons. But he was steadily able to broaden his output. A starring turn in Hirokazu Kore-eda’s Hana followed, as did voice acting in Studio Ghibli’s Tales From Earthsea and From Up on Poppy Hill. A more telling departure was a starring role in 2007’s SP, in which he played a rookie in a police bodyguard unit, for which he trained for several years under shootfighting instructor Yorinaga Nakamura.

“What I care about is whether audiences feel that ‘this man really lives here as a samurai.’”

In the years since, Okada has cemented himself as one of Japan’s most recognizable actors, moving from action-led roles in The Fable to sweeping period epics like Sekigahara. Those two genres converge in his Last Samurai Standing role of Shujiro, a former Shogunate samurai now reduced to poverty, working through his PTSD and reckoning with his bloodthirsty past in the game. These days, it’s less of a concern that the character butts up against his past idol image, he suggests. “What I care about is whether audiences feel that ‘this man really lives here as a samurai.’”


For Okada’s work on Last Samurai Standing, as both producer and action planner, that involved lacing high-octane but believable action with the respect for history and character studies of the period dramas he loves. “Rather than being 100 percent faithful to historical accuracy,” he adds, “my goal was to focus on entertainment and story, while letting the ‘DNA’ and beauty of Japanese period drama gently float up in the background.”

That means a focus on what he defines as “dō,” or movement — pure entertainment that “never lets the audience get bored” — punctuated with “ma,” the active emptiness that connects those frenetic moments. Both can be conversations, even if one uses words and the other speaks through sword blows. This is most apparent when Shujiro faces his former comrade Sakura (Yasushi Fuchikami) inside a claustrophobic bank vault that serves as a charnel house for the game’s less fortunate contestants.

“The whole battle is divided into three sequences,” Okada says. The first starts with a moment of almost perfect stillness, a deep breath, before the two launch into battle. “A fight where pride and mutual respect collide,” he says, “and where the speed of the techniques reaches a level that really surprises the audience.” It’s all captured in one zooming take with fast, tightly choreographed action reminiscent of Donnie Yen and Wu Jing in Kill Zone.

So intense is their duel that both shatter multiple swords. The next phase sees them lash out in a more desperate and brutal manner with whatever weapons they find. Finally, having fought to a weary stalemate, the fight becomes, Okada concludes, “a kind of duel where their stubbornness and will are fully exposed” as they hack at each other with shattered blades and spear fragments.

A still image from the Netflix series Last Samurai Standing.

Image: Netflix

It’s a rhythm that many fights in Last Samurai Standing follow, driven by a string of physical and emotional considerations that form the basis of an action planner’s tool kit: how and why someone fights based on who they are and their environment. Here it is two former samurai in an elegant and terrifyingly fast-paced duel. Elsewhere we see skill matched against brutality, or inexperience against expertise.


“I define a clear concept for each sequence,” Okada says, before he opens those concepts up to the broader team. From there, he might add notes, but in Last Samurai Standing, action is a collaborative affair. “We keep refining,” he says. “It’s a back-and-forth process of shaping the sequence using both the ideas the team brings and the choreography I create myself.”

There is a third concept, one Okada believes could become the series’ most defining. “If we get to continue the story,” he says, “I’d love to explore how much more we can lean into ‘sei’ — stillness, and bring in even more of a classical period drama feel.”

As much of a triumph of action as Last Samurai Standing is, its quietest moments are the ones that stay with you: the charged looks between Shujiro and Iroha (Kaya Kiyohara), their shuddering fright when confronted with specters of their past, and, most of all, Shujiro watching his young ward, Futaba Katsuki (Yumia Fujisaki), dance before a waterlogged torii as mist hovers. These pauses are what elevate and invigorate the breathless action above spectacle.

The pauses are also emblematic of the balance that Last Samurai Standing strikes between its period setting and pushing the boundaries of action, all to inject new excitement into the genre. “Japan is a country that values tradition and everything it has built up over time. That’s why moments where you try to update things are always difficult,” Okada says. “But right now, we’re in the middle of that transformation.”

That is an evolution that Okada hopes to support through his work, both in front of and behind the camera. If he can create avenues for new generations of talent to carry Japanese media to a broader audience and his team to achieve greater success on a global stage, “that would make me very happy,” he says. “I want to keep doing whatever I can to help make that possible.”


The first season of Last Samurai Standing is streaming on Netflix now, and a second season was just confirmed.



Technology

Free up iPhone storage by deleting large attachments



If your iPhone keeps warning you about low storage, your Messages app may be part of the problem. Photos, videos and documents saved inside your text threads can stack up fast. The good news is that you can clear those big files without erasing entire conversations.

Below, you will find simple steps that work on the latest iOS 26.1. These steps help you clean up storage while keeping your messages right where you want them.

If you haven’t updated to iOS 26.1, go to Settings > General > Software Update to install the latest version.




An iPhone displays a low-storage alert as large photos, videos and documents saved in Messages fill device space, prompting users to remove files without deleting entire conversations. (Cyberguy.com)

Why clearing attachments helps your iPhone run better

Removing large attachments gives you quick breathing room on your iPhone. It can free up gigabytes in seconds, especially if you text lots of photos or videos. Clearing old files also keeps your message threads tidy and helps your device run more smoothly by reducing the amount of storage your system needs to manage. The best part is that you can clean up everything without losing a single conversation.

How to delete attachments but keep your conversations on iPhone 

These quick steps help you clear large files from Messages while keeping every conversation intact.

  • Launch the Messages app on your iPhone.
  • Open the conversation thread that holds the attachments you want to delete.
  • Tap the name of the contact(s) in the text thread.
  • To the right of Info, tap Photos or Documents; you may need to swipe past other tabs to see these. Photos also contains videos and GIFs, while Documents holds Word documents, PDFs and other file types.
  • Touch and hold a photo, video or document until a menu appears.
  • Tap Delete to remove that single file, then confirm Delete when asked.

How to delete multiple files on your iPhone at once

To clear out several attachments at once, follow these quick steps on your iPhone.


Deleting attachments in Messages quickly frees space without losing your conversations. (Sean Gallup/Getty Images)

  • Go back to the Photos or Documents tab.
  • Tap Edit.
  • Tap Select Photos or Select Documents.
  • Tap on the photos or documents that you want to remove. You will see a blue checkmark appear in the bottom-right corner.
  • Tap the trash icon in the bottom right corner.

Confirm you want to delete the selected attachments by tapping Delete Photos.

These steps work almost the same way on an iPad. After you finish, you will often see an instant boost in available storage.

How to review large attachments in settings and delete them 

If you want to clear the biggest files on your device, you can check them from your iPhone’s storage screen and delete them:

  • Open Settings
  • Tap General
  • Choose iPhone Storage
  • Tap Messages
  • Tap Review Large Attachments to see the photos, videos and files taking up storage in Messages.
  • Tap Edit.
  • Select items to delete by tapping the circle next to each attachment you want to remove. A blue checkmark will appear.

Then tap the trash can icon in the upper right to delete them.

This method gives you a quick overview of what takes up the most space and lets you delete it quickly.
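If your Messages also sync to a Mac, you can approximate the same review from the command line. Here is a minimal Python sketch that lists the largest files under a folder, much like the Review Large Attachments screen does; the `~/Library/Messages/Attachments` path is the default location on macOS, and you may need to adjust it for your setup:

```python
# Sketch: list the largest files under a folder, similar in spirit to
# the iPhone's "Review Large Attachments" screen.
from pathlib import Path


def largest_files(root, top_n=10):
    """Return (size_in_bytes, path) pairs for the top_n largest files under root."""
    files = [(p.stat().st_size, p) for p in Path(root).rglob("*") if p.is_file()]
    return sorted(files, key=lambda pair: pair[0], reverse=True)[:top_n]


if __name__ == "__main__":
    # Default Messages attachments folder on macOS (adjust if yours differs).
    attachments = Path.home() / "Library" / "Messages" / "Attachments"
    if attachments.exists():
        for size, path in largest_files(attachments):
            print(f"{size / 1_048_576:8.1f} MB  {path.name}")
```

Deleting files from that folder directly is not recommended; use the Messages app itself, as in the steps above, so conversations stay in sync across devices.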


iPhone users can clear large photos, videos and files from Messages using built-in storage tools, helping free space, keep conversations intact and improve device performance. (Cyberguy.com)

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

Freeing up storage doesn’t have to be confusing. A few quick taps can remove bulky files and keep your conversations intact. With these simple steps, your iPhone stays organized, runs smoothly and is ready for more photos, videos and apps.

What type of attachment takes up the most space on your iPhone? Let us know by writing to us at Cyberguy.com

CLICK HERE TO DOWNLOAD THE FOX NEWS APP



Copyright 2025 CyberGuy.com.  All rights reserved.


Technology

The FCC’s foreign drone ban is here


The Federal Communications Commission has banned new drones made in foreign countries from being imported into the US unless the Department of Defense or the Department of Homeland Security recommends them. Monday’s action added drones to the FCC’s Covered List, qualifying foreign-made drones and drone parts, like those from DJI, as communications equipment representing “unacceptable risks to the national security of the United States and to the safety and security of U.S. persons.”

DJI is “disappointed” by today’s action, Adam Welsh, DJI’s head of global policy, says in a statement. “While DJI was not singled out, no information has been released regarding what information was used by the Executive Branch in reaching its determination.” Welsh adds that DJI “remains committed to the U.S. market” and notes that existing products can continue operating as usual. Other items on the FCC’s list include Kaspersky anti-virus software (added in 2022) and telecommunications equipment from Huawei and ZTE (added in 2021).

The FCC says it received a National Security Determination on December 21st from an interagency body saying that “uncrewed aircraft systems” (UAS) and critical UAS components produced in a foreign country could “enable persistent surveillance, data exfiltration, and destructive operations over U.S. territory” and that “U.S. cybersecurity and critical‑infrastructure guidance has repeatedly highlighted how foreign‑manufactured UAS can be used to harvest sensitive data, used to enable remote unauthorized access, or disabled at will via software updates.”

If you already own a drone made outside the US, you will still be able to use it, according to the FCC’s fact sheet. Drones or drone components can be removed from the Covered List if the DoD or DHS “makes a specific determination to the FCC” that it does not pose unacceptable risks.

“Unmanned aircraft systems (UAS), also known as drones, offer the potential to enhance public safety as well as cement America’s leadership in global innovation,” FCC chairman Brendan Carr says.
