How to Run Your Own ChatGPT-Like LLM for Free (and in Private)
The power of large language models (LLMs) such as ChatGPT, generally made possible by cloud computing, is obvious, but have you ever thought about running an AI chatbot on your own laptop or desktop? Depending on how modern your system is, you can likely run LLMs on your own hardware. But why would you want to?
Well, maybe you want to fine-tune a tool for your own data. Perhaps you want to keep your AI conversations private and offline. You may just want to see what AI models can do without the companies running cloud servers shutting down any conversation topics they deem unacceptable. With a ChatGPT-like LLM on your own hardware, all of these scenarios are possible.
And hardware is less of a hurdle than you might think. The latest LLMs are optimized to work with Nvidia graphics cards and with Macs using Apple M-series processors—even low-powered Raspberry Pi systems. And as new AI-focused hardware comes to market, like the integrated NPU of Intel’s “Meteor Lake” processors or AMD’s Ryzen AI, locally run chatbots will be more accessible than ever before.
Thanks to platforms like Hugging Face and communities like Reddit’s LocalLlaMA, the software models behind sensational tools like ChatGPT now have open-source equivalents—in fact, more than 200,000 different models are available at this writing. Plus, thanks to tools like Oobabooga’s Text Generation WebUI, you can access them in your browser using clean, simple interfaces similar to ChatGPT, Bing Chat, and Google Bard.
So, in short: Locally run AI tools are freely available, and anyone can use them. However, none of them is ready-made for non-technical users, and the category is new enough that you won’t find many easy-to-digest guides or instructions on how to download and run your own LLM. It’s also important to remember that a local LLM won’t be nearly as fast as a cloud-server platform because its resources are limited to your system alone.
Nevertheless, we’re here to help the curious with a step-by-step guide to setting up your own ChatGPT alternative on your own PC. Our guide uses a Windows machine, but the tools listed here are generally available for Mac and Linux systems as well, though some extra steps may be involved when using different operating systems.
Some Warnings About Running LLMs Locally
First, however, a few caveats—scratch that, a lot of caveats. As we said, these models are free, made available by the open-source community. They rely on a lot of other software, which is usually also free and open-source. That means everything is maintained by a hodgepodge of solo programmers and teams of volunteers, along with a few massive companies like Facebook and Microsoft. The point is that you’ll encounter a lot of moving parts, and if this is your first time working with open-source software, don’t expect it to be as simple as downloading an app on your phone. Instead, it’s more like installing a bunch of software before you can even think about downloading the final app you want—which then still may not work. And no matter how thorough and user-friendly we try to make this guide, you may run into obstacles that we can’t address in a single article.
Also, finding answers can be a real pain. The online communities devoted to these topics are usually helpful in solving problems. Often, someone’s solved the problem you’re encountering in a conversation you can find online with a little searching. But where is that conversation? It might be on Reddit, in an FAQ, on a GitHub page, in a user forum on HuggingFace, or somewhere else entirely.
It’s worth repeating that open-source AI is moving fast. Every day new models are released, and the tools used to interact with them change almost as often, as do the underlying training methods and data, and all the software undergirding that. As a topic to write about or to dive into, AI is quicksand. Everything moves whip-fast, and the environment undergoes massive shifts on a constant basis. So much of the software discussed here may not last long before newer and better LLMs and clients are released.
Bottom line: Proceed at your own risk. There’s no Geek Squad to call for help with open-source software; it’s not all professionally maintained; and you’ll find no handy manual to read or customer service department to turn to—just a bunch of loosely organized online communities.
Finally, once you get it all running, these AI models have varying degrees of polish, but they all carry the same warnings: Don’t trust what they say at face value, because it’s often wrong. Never look to an AI chatbot to help make your health or financial decisions. The same goes for writing your school essays or your website articles. Also, if the AI says something offensive, try not to take it personally. It’s not a person passing judgment or spewing questionable opinions; it’s a statistical word generator made to spit out mostly legible sentences. If any of this sounds too scary or tedious, this may not be a project for you.
Select Your Hardware
Before you begin, you’ll need to know a few things about the machine on which you want to run an LLM. Is it a Windows PC, a Mac, or a Linux box? This guide, again, will focus on Windows, but most of the resources referenced offer additional options and instructions for other operating systems.
You also need to know whether your system has a discrete GPU or relies on its CPU’s integrated graphics. Plenty of open-source LLMs can run solely on your CPU and system memory, but most are made to leverage the processing power of a dedicated graphics chip and its extra video RAM. Gaming laptops, desktops, and workstations are better suited to these applications, since they have the powerful graphics hardware these models often rely on.
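If you want to estimate whether a given model will fit on your hardware, a rough rule of thumb is that the weights alone take the parameter count times the bytes per parameter, which depends on the precision the model is stored in. The sketch below is our own back-of-envelope math, not something the tools report, and real memory use runs higher once the conversation context is factored in:

```python
# Rough floor on the memory an LLM needs just to hold its weights.
# Actual usage is higher (context cache, activations), so treat this
# as a lower bound, not a guarantee that a model will fit.

BYTES_PER_PARAM = {
    "fp16": 2.0,  # full half-precision weights
    "int8": 1.0,  # 8-bit quantized
    "int4": 0.5,  # 4-bit quantized (common for local use)
}

def min_memory_gb(n_params_billions: float, precision: str) -> float:
    """Minimum GB of (V)RAM needed to load the model weights."""
    bytes_needed = n_params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return bytes_needed / 1e9  # convert bytes to gigabytes

# A 7-billion-parameter model at different precisions:
for precision in ("fp16", "int8", "int4"):
    print(f"7B @ {precision}: ~{min_memory_gb(7, precision):.1f} GB")
```

The arithmetic shows why quantization matters for home hardware: a 7B model needs about 14GB in half precision, which overflows a 12GB graphics card, but only about 3.5GB once quantized to 4 bits.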
Gaming laptops and mobile workstations offer the best hardware for running LLMs at home. (Credit: Molly Flores)
In our case, we’re using a Lenovo Legion Pro 7i Gen 8 gaming notebook, which combines a potent Intel Core i9-13900HX CPU, 32GB of system RAM, and a powerful Nvidia GeForce RTX 4080 mobile GPU with 12GB of dedicated VRAM.
If you’re on a Mac or Linux system, are CPU-dependent, or are using AMD instead of Intel hardware, be aware that while the general steps in this guide are correct, you may need extra steps and additional or different software to install. And the performance you see could be markedly different from what we discuss here.
Set Up Your Environment and Required Dependencies
To start, you must download one piece of prerequisite software: Microsoft Visual Studio 2019. Any updated release of Visual Studio 2019 will work (though not the newer annual versions), but we recommend getting the latest one directly from Microsoft.
(Credit: Brian Westover/Microsoft)
Personal users can skip the Enterprise and Professional editions and download just the BuildTools version of the software.
Find the latest version of Visual Studio 2019 and download the BuildTools version (Credit: Brian Westover/Microsoft)
After choosing that, be sure to select “Desktop Development with C++.” This step is essential in order for other pieces of software to work properly.
Be sure to select “Desktop development with C++.” (Credit: Brian Westover/Microsoft)
Begin your download and kick back: Depending on your internet connection, it could take several minutes before the software is ready to launch.
(Credit: Brian Westover/Microsoft)
Download Oobabooga’s Text Generation WebUI Installer
Next, you need to download the Text Generation WebUI tool from Oobabooga. (Yes, it’s a silly name, but the GitHub project makes an easy-to-install and easy-to-use interface for AI stuff, so don’t get hung up on the moniker.)
(Credit: Brian Westover/Oobabooga)
To download the tool, you can either navigate through the GitHub page or go directly to the collection of one-click installers Oobabooga has made available. We’ve installed the Windows version, but this is also where you’ll find installers for Linux and macOS. Download the zip file shown below.
(Credit: Brian Westover/Oobabooga)
Create a new file folder someplace on your PC that you’ll remember and name it AI_Tools or something similar. Do not use any spaces in the folder name, since that will mess up some of the automated download and install processes of the installer.
(Credit: Brian Westover/Microsoft)
Then, extract the contents of the zip file you just downloaded into your new AI_Tools folder.
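Both of those steps, creating a space-free folder and then extracting into it, can also be done from a short script. Here's a minimal Python sketch; the zip filename shown is hypothetical, so replace it with whatever your download is actually called:

```python
import os
import zipfile

def extract_installer(zip_path: str, dest_folder: str) -> None:
    """Extract the installer zip into a folder whose path has no spaces."""
    # Spaces in the path can break the installer's automated scripts,
    # so refuse to continue rather than fail mysteriously later.
    if " " in os.path.abspath(dest_folder):
        raise ValueError("Folder path contains spaces; this can break the installer.")
    os.makedirs(dest_folder, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_folder)

# Hypothetical filename -- use the name of the zip you actually downloaded:
# extract_installer("oobabooga_windows.zip", r"C:\AI_Tools")
```

The early check for spaces is the point of the sketch: catching the bad folder name up front is much easier than diagnosing a half-finished install afterward.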
Run the Text Generation WebUI Installer
Once the zip file has been extracted to your new folder, look through the contents. You should see several files, including one called start_windows.bat. Double-click it to begin installation.
Depending on your system settings, you might get a warning about Windows Defender or another security tool blocking this action, because it’s not from a recognized software vendor. (We haven’t experienced or seen anything reported online to indicate that there’s any problem with these files, but we’ll repeat that you do this at your own risk.) If you wish to proceed, select “More info” to confirm whether you want to run start_windows.bat. Click “Run Anyway” to continue the installation.
(Credit: Brian Westover/Microsoft)
Now, the installer will open up a command prompt (CMD) and begin installing the dozens of software pieces necessary to run the Text Generation WebUI tool. If you’re unfamiliar with the command-line interface, just sit back and watch.
(Credit: Brian Westover/Microsoft)
First, you’ll see a lot of text scroll by, followed by simple progress bars made up of hashtag or pound symbols, and then a text prompt will appear. It will ask you what your GPU is, giving you a chance to indicate whether you’re using Nvidia, AMD, or Apple M series silicon or just a CPU alone. You should already have figured this out before downloading anything. In our case, we select A, because our laptop has an Nvidia GPU.
(Credit: Brian Westover/Microsoft)
Once you’ve answered the question, the installer will handle the rest. You’ll see plenty of text scroll by, followed first by simple text progress bars and then by more graphically pleasing pink and green progress bars as the installer downloads and sets up everything it needs.
(Credit: Brian Westover/Microsoft)
At the end of this process (which may take up to an hour), you’ll be greeted by a warning message surrounded by asterisks. This warning will tell you that you haven’t downloaded any large language model yet. That’s good news! It means that Text Generation WebUI is just about done installing.
(Credit: Brian Westover/Microsoft)
At this point you’ll see some text in green that reads “Info: Loading the extension gallery.” Your installation is complete, but don’t close the command window yet.
(Credit: Brian Westover/Microsoft)
Copy and Paste the Local Address for WebUI
Immediately below the green text, you’ll see another line that says “Running on local URL: http://127.0.0.1:7860.” Just click that URL text, and it will open your web browser, serving up the Text Generation WebUI—your interface for all things LLM.
(Credit: Brian Westover/Microsoft)
You can save this URL somewhere or bookmark it in your browser. Even though Text Generation WebUI is accessed through your browser, it runs locally, so it’ll work even if your Wi-Fi is turned off. Everything in this web interface is local, and the data generated should be private to you and your machine.
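Because the interface is just a local web server, you can also confirm it's running from a script rather than a browser. This small sketch (assuming WebUI's default port of 7860) simply checks whether anything is listening on that port:

```python
import socket

def webui_is_up(host: str = "127.0.0.1", port: int = 7860) -> bool:
    """Return True if something is listening on the WebUI's local port."""
    try:
        # A successful TCP connection means a server is answering there.
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening.
        return False

print("WebUI running:", webui_is_up())
```

Note that this only confirms a server is answering on that port, not that it's WebUI specifically, but it's a handy sanity check before pointing your browser at the URL.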
(Credit: Brian Westover/Oobabooga)
Close and Reopen WebUI
Once you’ve successfully accessed the WebUI to confirm it’s installed correctly, go ahead and close both the browser and your command window.
In your AI_Tools folder, open up the same start_windows batch file that we ran to install everything. It will reopen the CMD window but, instead of going through that whole installation process, will load up a small bit of text including the green text from before telling you that the extension gallery is loaded. That means the WebUI is ready to open again in your browser.
(Credit: Brian Westover/Oobabooga)
Use the same local URL you copied or bookmarked earlier, and you’ll be greeted once again by the WebUI interface. This is how you will open the tool in the future, leaving the CMD window open in the background.
Select and Download an LLM
Now that you have the WebUI installed and running, it’s time to find a model to load. As we said, you’ll find thousands of free LLMs you can download and use with WebUI, and the process of installing one is pretty straightforward.
If you want a curated list of the most recommended models, you can check out a community like Reddit’s /r/LocalLlaMA, which includes a community wiki page that lists several dozen models. It also includes information about what different models are built for, as well as data about which models are supported by different hardware. (Some LLMs specialize in coding tasks, while others are built for natural text chat.)
These lists will all end up sending you to Hugging Face, which has become a repository of LLMs and resources. If you came here from Reddit, you were probably directed straight to a model card, which is a dedicated information page about a specific downloadable model. These cards provide general information (like the datasets and training techniques that were used), a list of files to download, and a community page where people can leave feedback as well as request help and bug fixes.
At the top of each model card is a big, bold model name. In our case, we used the WizardLM 7B Uncensored model made by Eric Hartford. He uses the screen name ehartford, so the model’s listed location is “ehartford/WizardLM-7B-Uncensored,” exactly how it’s listed at the top of the model card.
Next to the title is a little copy icon. Click it, and it will save the properly formatted model name to your clipboard.
(Credit: Brian Westover/Hugging Face)
Back in WebUI, go to the model tab and enter that model name into the field labeled “Download custom model or LoRA.” Paste in the model name, hit Download, and the software will start downloading the necessary files from Hugging Face.
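The “user/model” string you copied is all WebUI needs, because it maps directly onto locations both on Hugging Face and on your disk. As an illustration, this sketch builds the repo URL and the local folder name WebUI typically uses; treat the underscore-joined folder naming as an assumption about the current version rather than a guarantee:

```python
def model_locations(model_id: str) -> dict:
    """Map a Hugging Face 'user/model' ID to its repo URL and the
    local folder name Text Generation WebUI typically creates."""
    user, name = model_id.split("/")
    return {
        "repo_url": f"https://huggingface.co/{model_id}",
        # WebUI usually stores downloads under models/, joining the
        # two halves of the ID with an underscore.
        "local_folder": f"models/{user}_{name}",
    }

locs = model_locations("ehartford/WizardLM-7B-Uncensored")
print(locs["repo_url"])
print(locs["local_folder"])
```

Knowing this mapping is useful when a download stalls: you can check the models folder directly to see which files actually arrived.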
(Credit: Brian Westover/Oobabooga)
If successful, you’ll see an orange progress bar pop up in the WebUI window and several progress bars will appear in the command window you left open in the background.
(Credit: Brian Westover/Oobabooga)
(Credit: Brian Westover/Oobabooga)
Once it’s finished (again, be patient), the WebUI progress bar will disappear and it will simply say “Done!” instead.
Load Your Model and Settings in WebUI
Once you’ve got a model downloaded, you need to load it up in WebUI. To do this, select it from the drop-down menu at the upper left of the model tab. (If you have multiple models downloaded, this is where you choose one to use.)
Before you can use the model, you need to allocate some system or graphics memory (or both) to running it. While you can tweak and fine-tune nearly anything you want in these models, including memory allocation, we’ve found that setting it at roughly two-thirds of both GPU and CPU memory works best. That leaves enough unused memory for your other PC functions while still giving the LLM enough memory to track and hold a longer conversation.
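The two-thirds guideline is easy to apply to your own machine; this sketch just does the arithmetic for the hardware in our test laptop, so swap in your own totals:

```python
def suggested_allocation(total_gb: float, fraction: float = 2 / 3) -> float:
    """Suggest how much memory to give the model, leaving the rest
    free for the operating system and your other programs."""
    return round(total_gb * fraction, 1)

# Our test laptop: 12GB of VRAM and 32GB of system RAM.
print("GPU memory for the model:", suggested_allocation(12), "GB")
print("CPU memory for the model:", suggested_allocation(32), "GB")
```

On our 12GB GPU that works out to about 8GB for the model; if responses degrade or the model crashes mid-conversation, nudging the fraction down is the first thing to try.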
(Credit: Brian Westover/Oobabooga)
Once you’ve allocated memory, hit the Save Settings button to save your choice, and it will default to that memory allocation every time. If you ever want to change it, you can simply reset it and press Save Settings again.
Enjoy Your LLM!
With your model loaded up and ready to go, it’s time to start chatting with your ChatGPT alternative. Navigate within WebUI to the Text Generation tab. Here you’ll see the actual text interface for chatting with the AI. Enter text into the box, hit Enter to send it, and wait for the bot to respond.
(Credit: Brian Westover/Oobabooga)
Here, we’ll say again, is where you’ll experience a little disappointment: Unless you’re using a super-duper workstation with multiple high-end GPUs and massive amounts of memory, your local LLM won’t be anywhere near as quick as ChatGPT or Google Bard. The bot will spit out fragments of words (called tokens) one at a time, with a noticeable delay between each.
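The sluggishness is easy to quantify: divide the length of a reply in tokens by your machine's generation speed. The rates below are purely illustrative, not benchmarks of any particular setup:

```python
def seconds_for_reply(n_tokens: int, tokens_per_sec: float) -> float:
    """How long a reply of n_tokens takes at a given generation speed."""
    return n_tokens / tokens_per_sec

# A ~300-token reply (a few paragraphs) at illustrative speeds:
# slow local setup vs. decent local GPU vs. cloud-class hardware.
for rate in (5, 20, 80):
    print(f"{rate:>3} tokens/s -> {seconds_for_reply(300, rate):.0f} s")
```

At 5 tokens per second, a few paragraphs take a full minute to appear, which is why local chats feel so different from the near-instant responses of cloud services.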
However, with a little patience, you can have full conversations with the model you’ve downloaded. You can ask it for information, play chat-based games, even give it one or more personalities. Plus, you can use the LLM with the assurance that your conversations and data are private, which gives peace of mind.
You’ll encounter a ton of content and concepts to explore while starting with local LLMs. As you use WebUI and different models more, you’ll learn more about how they work. If you don’t know your text from your tokens, or your GPTQ from a LoRA, these are ideal places to start immersing yourself in the world of machine learning.
Nintendo omits original Donkey Kong Country Returns team from the remaster’s credits
Donkey Kong Country Returns HD, the just-launched port of the 2010 Wii game, doesn’t include individual members of the original Retro Studios development team in the credits, as reported by GameSpot. Since the discovery, however, Nintendo has commented on the omission, giving a statement to Eurogamer.
“We believe in giving proper credit for anyone involved in making or contributing to a game’s creation, and value the contributions that all staff make during the development process,” reads the statement, which is attributed only to Nintendo rather than to a specific individual. The game’s credits reveal that the port was handled by Forever Entertainment.
Best ways to give your old iPhone a second life
Before tossing out your old iPhone, consider it a treasure trove of potential waiting to be unlocked.
Your seemingly outdated device isn’t just electronic waste. It’s a versatile gadget ready for an exciting second life. From transforming into a smart home hub to becoming a dedicated digital companion, an old iPhone can be repurposed in countless creative ways that breathe new life into technology you may not have considered.
Here are some of the best ways to use your old iPhone.
1. Turn it into a ‘dumb phone’
Smartphones are incredibly powerful, but they can also be overwhelming. The constant notifications and social media updates can make it hard to focus or enjoy the present moment. If you’re finding that your iPhone is more of a time-waster than a tool for productivity, why not transform it into a “dumb phone”? A “dumb phone” is a basic mobile device that focuses on essential communication functions like calling and texting, while minimizing digital distractions through limited internet access and app capabilities, helping you reduce screen time and stay more present.
Steps to disable apps and notifications
Disable notifications:
- Open Settings
- Tap on Notifications
- Select each app and toggle off Allow Notifications
Delete unwanted apps:
- Press and hold the app icon on the home screen
- Tap Remove App
Factory reset (Optional):
- Back up your data, if needed
- Go to Settings > General > Transfer or Reset iPhone > Erase All Content and Settings
2. Give it to your kids
Eventually, you may want to give your child their own smartphone. While a new phone can be expensive, handing down your old iPhone is a great way to introduce them to technology while also being mindful of your budget.
By using Family Sharing and parental controls, you can carefully monitor what apps and content your child accesses. Plus, it’s an excellent way to bring them into the Apple ecosystem.
Steps to set up Family Sharing and Parental Controls
Set up Family Sharing:
- Open Settings and tap on your name at the top.
- Select Family Sharing and tap Add Family Member to invite your child.
- If your child does not have an Apple ID, select Create an Account for a Child and follow the prompts to set up their account.
- If they already have an Apple ID, tap Invite People and choose how you want to send the invitation (AirDrop, Messages or Mail).
Enable Parental Controls:
- Open Settings and scroll down to tap on Screen Time
- Under the Family section, tap on your child’s name
- If Screen Time is not already enabled, tap Turn On Screen Time
- Follow the prompts to set it up as your child’s device.
- Tap on Content & Privacy Restrictions
- If prompted, enter your Screen Time passcode (you will need to create one if you haven’t already)
- Toggle on Content & Privacy Restrictions
- You can now customize settings such as app limits, content restrictions and downtime settings, as needed.
- For app limits, tap App Limits, then select categories or individual apps to set time limits.
- To restrict explicit content or purchases, go to the respective sections under Content & Privacy Restrictions
This process will help you manage your child’s device usage effectively while ensuring they have access to appropriate content.
3. Repurpose it as a webcam
In recent years, Apple’s Continuity Camera feature has made it easy to use an iPhone as a webcam for your Mac or Apple TV. While newer models work well for this purpose, older iPhones can still serve as excellent webcams, especially for online meetings and video calls.
Instead of purchasing an external webcam, your old iPhone can deliver superior video quality. iOS 18 even allows older iPhones to work as dedicated continuity cameras for Apple TV, which is ideal for FaceTime or Zoom calls with family and friends.
Steps to use your iPhone as a webcam
- Download webcam apps: Consider apps like EpocCam or DroidCam from the App Store
- Connect your iPhone: Follow the app’s instructions to connect your iPhone to your computer via USB or Wi-Fi
- Select your iPhone in video settings: In your video conferencing app (like Zoom or FaceTime), select your iPhone as the camera source
4. Make it a dedicated music player
Why use your main iPhone for music when you can repurpose your old one as a dedicated MP3 player? With your old iPhone set up as a music player, you can enjoy your favorite tunes or podcasts without the distractions of text messages or social media notifications. It’s a perfect solution for workouts, long drives or when you just want to zone out with music.
5. Use it as a remote or smart home controller
If you own an Apple TV, you already know that your iPhone can function as a remote control. However, keeping your main phone tied up with the remote can be inconvenient. By making your old iPhone a dedicated Apple TV remote, you can still enjoy controlling your TV without using your primary phone.
Steps to set up your old iPhone as a remote
Set up Apple TV remote feature:
- Ensure both devices are connected to the same Wi-Fi network
- Open the Control Center on the old iPhone (swipe down from the upper-right corner)
- Tap on the Apple TV remote icon and follow the prompts to connect
Manage smart home devices:
- Download smart home apps like Apple Home or those specific to your devices (e.g., Philips Hue)
- Follow the app instructions to add and control devices
Your old iPhone can even be a hub for all your smart home devices, from adjusting lights and thermostats to checking security cameras.
6. Save it for gaming
Smartphones have transformed mobile gaming, and your old iPhone could be an ideal portable gaming console. Many classic and modern games run smoothly on older iPhone models, and with subscription services like Apple Arcade, you can access a huge library of high-quality games. This is a fun, low-cost way to enjoy mobile gaming without draining your main iPhone’s battery life.
7. Convert it into an e-reader
For book lovers, using an old iPhone as a dedicated e-reader is a great option. You can install apps like Kindle or Apple Books from the App Store. The iPhone’s display is perfect for reading books and graphic novels, and since you’re not using your main iPhone, there are no distractions like notifications to interrupt your reading.
You can disable all apps and notifications on the old device, making it a peaceful reading experience. Plus, you can still connect your AirPods to listen to audiobooks while on the go.
Kurt’s key takeaways
Just because you’ve upgraded to a new iPhone doesn’t mean your old one has to be discarded. There are countless ways to repurpose it, from boosting your productivity to controlling your smart home. This way, you can extend its lifespan and maximize the value of your purchase.
Copyright 2025 CyberGuy.com. All rights reserved.
6 TikTok creators on where they’ll go if the app is banned
President Joe Biden signed legislation last April that officially began the countdown that would force TikTok’s parent company, ByteDance, to divest from the US business. But even afterward, the atmosphere on the video powerhouse was mostly nonchalant, with a handful of stray jokes about “this app disappearing” slotted between the usual fare.
In the last week, though, the vibe has shifted: my favorite creators are posting links to their other social accounts, audiences are making highlight reels of the app’s most viral moments, and users are saying goodbye to their “Chinese spy” and jokingly threatening to hand their data straight to the Chinese government. The Chinese-owned app Xiaohongshu, known as RedNote, topped the App Store this week, driven by a wave of “TikTok refugees” trying to recreate the experience of the platform. It feels a bit like a fever-dream last day of school.
For many creatives online, this wouldn’t be the first time they’ve had to migrate to new spaces: reach, engagement, and visibility are constantly shifting even on the largest and most stable platforms. But the possibility that a social media site of this size would disappear — or slowly break down until it’s nonfunctional — is a new threat. For small creators especially, TikTok is like playing the lottery: you don’t need thousands of followers for your video to get big, and this unpredictability incentivized the average person to upload content.
It’s still unclear what will happen to TikTok after January 19th. I asked content creators what their game plan is. (Responses have been edited and condensed for clarity.)
Noelle Johansen, @astraeagoods (89K followers)
“At the peak, I was making approximately 70 percent of my sales through TikTok from December 2020 to January 2022. Now, it drives at most, 10 percent of my sales,” says Noelle Johansen, who sells slogan sweatshirts, accessories, stickers, and other products.
“At my peak with TikTok, I was able to reach so many customers with ease. Instagram and Twitter have always been a shot in the dark as to whether the content will be seen, but TikTok was very consistent in showing my followers and potential new customers my videos,” Johansen told The Verge in an email. “I’ve also made great friends from the artist community on TikTok, and it’s difficult to translate that community to other social media. Most apps function a lot differently than TikTok, and many people don’t have the bandwidth to keep up with all of the new socials and building platforms there.”
Going forward, Johansen says they’ll focus on X and Instagram for sales while working to grow an audience on Bluesky and Threads.
Kay Poyer, @ladymisskay_ (704K followers)
“I think the ease of use on TikTok opened an avenue for a lot of would-be creators,” Kay Poyer, a popular creator making humor and commentary content, says. “Right now we’re seeing a cleaving point, where many will choose to stop or be forced to adapt back to older platforms (which tend to be more difficult to build followings on and monetize).”
As for her own plans, Poyer says she’ll stay where the engagement is if TikTok becomes unavailable — smaller platforms like Bluesky or Neptune aren’t yet impactful enough.
“I’m seeing a big spike in subscribers to my Substack, The Quiet Part, as well as followers flooding to my Instagram and Twitter,” Poyer told The Verge. “Personally I have chosen to make my podcast, Meat Bus, the flagship of my content. We’re launching our video episodes sometime next month on YouTube.”
Bethany Brookshire, @beebrookshire (18K followers)
Bethany Brookshire, a science journalist and author, has been sharing videos about human anatomy on TikTok, Bluesky, Instagram, and YouTube. Across platforms, Brookshire has observed differences in audiences — YouTube, for example, “is not a place [to] build an audience,” she says, citing negative comments on her work.
“I find people on TikTok comment and engage a lot more, and most importantly, their comments are often touching or funny,” she says. “When I was doing pelvic anatomy, a lot of people with uteruses wrote in to tell me they felt seen, that they had a specific condition, and they even bonded with each other in the comments.”
Brookshire told The Verge in an email that sharing content anywhere can at times feel fraught. Between Nazi content on Substack, right-wing ass-kissing at Meta, and the national security concerns of TikTok, it doesn’t feel like any platform is perfectly ideal.
“Sometimes I feel like the only ethical way to produce any content is to write it out in artisanal chalk on an organically sourced vegan stone, which I then try to show to a single person with their consent before gently tossing it into the ocean to complete its circle of life,” Brookshire says. “But if I want to inform, and I want to educate, I need to be in the places people go.”
Woodstock Farm Sanctuary, @woodstocksanctuary (117K followers)
The Woodstock Farm Sanctuary in upstate New York uses TikTok to share information with new audiences — the group’s Instagram following is mostly people who are already animal rights activists, vegans, or sanctuary supporters.
“TikTok has allowed us to reach people who don’t even know what animal sanctuaries are,” social media coordinator Riki Higgins told The Verge in an email. “While we still primarily fundraise via Meta platforms, we seem to make the biggest education and advocacy impact when we post on TikTok.”
With a small social media and marketing team of two, Woodstock Farm Sanctuary (like other small businesses and organizations) must be strategic in how it uses its efforts. YouTube content can be more labor-intensive, Higgins says, and Instagram Reels is missing key features like 2x video speed and the ability to pause videos.
“TikTok users really, really don’t like Reels. They view it as the platform where jokes, trends, etc., go to die, where outdated content gets recycled, and especially younger users see it as an app only older audiences use,” Higgins says.
The sanctuary says it will meet audiences wherever they migrate in the case that TikTok becomes inaccessible.
Anna Rangos, @honeywhippedfeta (15K followers)
Anna Rangos, who works in social media and makes tech and cultural commentary videos, is no stranger to having to pick up and leave a social media platform for somewhere else. As a retired sex worker, she saw firsthand how fragile a social media following could be.
“You could wake up one day to find your accounts deactivated, and restoring them? Forget it. Good luck getting any kind of service from Meta,” Rangos said in an email. Having an account deleted means lost income and hours of trying to rebuild a following. “Over my time in the industry, I went through three or four Instagram accounts, constantly trying to recapture my following.”
Sex workers and sex education creators regularly deal with their content being removed, censored, or entire accounts deleted. Rangos says that though the community on TikTok is more welcoming, she’s working to stake out her own space through a website and a newsletter. She also plans to stay active on YouTube, Pinterest, and Bluesky.
“I don’t plan on using Meta products much, given [Mark] Zuckerberg’s recent announcements regarding fact-checking,” she wrote in an email.
Amanda Chavira, @lost.birds.beads (10K followers)
“I have found so much joy and community on TikTok mostly through Native TikTok,” says Amanda Chavira, an Indigenous beader who built an audience through tutorials and cultural content. “It’s sad to see TikTok go.”
Chavira says she plans to reupload some of her content to YouTube Shorts to see how her videos perform there but otherwise will be waiting to see if another viable video platform comes along. Chavira won’t be pivoting to Meta: she says she plans to delete her accounts on Threads, Instagram, and Facebook.
“I’d been considering leaving my Meta accounts for a long time,” she said in an email. “Facebook felt like a terrible place through election cycles, and then the pandemic, [and] then every other post I was seeing was a suggested ad or clickbait article. For Instagram, I’ve really been struggling to reach my target audience and didn’t have the time available to post all the time to try to increase engagement.” Her final straw was Meta’s decision to end the fact-checking program and Zuckerberg’s “pandering to the Trump administration,” she says.