This is Hot Pod, The Verge’s newsletter about podcasting and the audio industry. Sign up here for more.
Technology
Spotify’s podcast future isn’t very original
When Spotify announced yesterday that it would lay off 200 employees from its podcast unit and combine Gimlet and Parcast into a single operation, it came as a shock to outside observers. But former and current podcast employees at Spotify have seen the writing on the wall for some time.
“We definitely have expected for several months now that they’d be axing people since the vibe at Gimlet had been very much one of walking on eggshells for months now,” one former Gimlet employee who was a part of yesterday’s layoffs told Hot Pod. “Zero joy. [The layoffs] were more just a matter of when. The fact that it was yesterday, that was the surprise.”
It’s been more than a year since Spotify first eliminated its namesake podcast production unit. Last fall, Spotify laid off dozens of Gimlet and Parcast workers and pulled 11 original shows from production. It began this year by axing 600 jobs companywide (including a number of ad and business jobs under Podsights and Chartable). High-profile executives such as content chief Dawn Ostroff (who steered Spotify’s podcast operations) have left. Prominent names, including Barack and Michelle Obama’s Higher Ground Productions, Brené Brown, and Esther Perel, have exited deals with the platform. Jemele Hill hasn’t left yet but is weighing other options.
As Bloomberg reported last week, neither Parcast nor Gimlet had received annual budgets, so they hadn’t been able to greenlight new shows or approve travel expenses. It was only this week that we found out Spotify’s reasoning for this: both Gimlet and Parcast will be combined to form a new Spotify Originals studio focused on original productions, which will include producing shows like Stolen, The Journal, Science Vs, Heavyweight, Serial Killers, and Conspiracy Theories.
Another Gimlet employee who was laid off noted that production staff — producers, reporters, and engineers — seem to be most heavily hit by the job cuts.
Both nonfiction and fiction shows were impacted. Spotify spokesperson Grey Munford confirmed that Case 63, which dropped its first season last fall, will be continuing. The show is produced by Gimlet along with Julianne Moore and Oscar Isaac’s production companies, FortySixty and Mad Gene Media. Much of the Gimlet fiction team behind Case 63, the chart-topping fiction podcast starring Moore and Isaac, is now under Spotify’s head of development, Liz Gateley, according to Munford.
As for what Spotify wants the remaining chunk of Gimlet and Parcast to do, the answer appears to be: stay out of the way as the company embraces creators and third-party deals. The company spelled out clearly that the next phase of its podcast strategy is to focus on creators and users, including Spotify for Podcasters, the company’s ad and monetization platform.
“We know that creators have embraced the global audience on our platform but want improved discovery to help them grow their audience. We also know that they appreciate our tools and creator support programs but want more optionality and flexibility in terms of monetization. Fortunately, Spotify is not a company that ever sits still. Given these learnings and our leadership position, we recently embarked on the next phase of our podcast strategy, which is focused on delivering even more value for creators (and users!),” wrote Sahar Elhabashi, Spotify’s head of podcast business.
The merging of Gimlet and Parcast seems to be an unnatural one, according to a number of former employees. Parcast, which Spotify acquired in 2019, focuses largely on true-crime podcasts, with shows like Criminal Passion and Criminal Couples. Gimlet is known for its lineup of audio journalism series and interview podcasts, as well as scripted audio dramas. Gimlet won its first Pulitzer Prize in audio reporting earlier this year for Stolen: Surviving St. Michael’s by investigative journalist Connie Walker. A second season of Stolen has been greenlit.
“I’m not sure what folding Parcast and Gimlet together in one team means. They’re very different styles of production and development, so they need different kinds of support in terms of marketing, PR, development, and other skill sets,” said one former Gimlet employee, who left prior to this week’s layoffs.
Gimlet blew up largely due to its original shows, which helped define the podcast boom. Series like Reply All, The Nod, Heavyweight, and StartUp helped push the bar on what audio storytelling could be, and both advertisers and investors lined up to get involved. But under the leadership of Spotify, both Gimlet and Parcast struggled to find direction. Reply All came to an inglorious end just over a year ago, and Gimlet under Spotify hasn’t produced another equivalent hit.
The blame lies at least partly with Spotify’s inability to fully understand what it was buying for a combined total of roughly $300 million. The unification of Parcast and Gimlet is a good example of that.
“Our shows and content are very different,” said one Gimlet worker who was laid off yesterday. “The fact that Spotify is merging them so clumsily only further illustrates that they never really understood or appreciated either of us fully.”
The hasty merger and axing of original programming echo similar tactics from the world of streaming video, such as Warner Bros. Discovery’s decision to combine HBO Max and Discovery Plus’ offerings into one streaming platform or Paramount’s decision to merge Showtime (which generates premium scripted series like Yellowjackets) with Paramount Plus, which is home to shows from CBS, BET, and TV Land — as well as live sports.
Such decisions reflect the reality of today’s cash-strapped streaming environment. Much like how Netflix would once go on buying sprees at Cannes and now makes reality shows like Too Hot to Handle, Spotify is moving away from pricey originals and embracing amateur podcasters and creator partnerships (not to mention its highest-value celebrity audio deals, such as that with Joe Rogan). In both cases, companies are trimming their original programming in favor of content that is cheaper to produce and generates more eyeballs and downloads.
Less prestigious content won’t make a difference to advertisers, says Max Willens, a senior analyst at Insider Intelligence. “I would say that advertisers will welcome this decision in the sense that it may give them more inventory to advertise against, possibly at a more attractive price. The longform, highly produced content that Gimlet made its name creating was costlier and took longer to produce, and often commanded premium ad prices, which advertisers sometimes chafed at.”
But for those who work in the audio industry, Spotify’s hasty exit from the world of podcasting and original audio journalism aligns with the behavior they’ve grown to expect from the tech company.
“[The individuals laid off] are some of the most talented, experienced producers in the entire industry,” said a Gimlet staffer who left prior to this week’s layoffs. “It’s disappointing that Spotify never understood that — and how to harness that creativity and experience.”
Audiobooks and podcasts may become a haven in the event of a SAG strike
SAG-AFTRA overwhelmingly voted in support of a strike if the union doesn’t reach a deal with the studios, union leadership announced on Monday night. Although SAG-AFTRA is traditionally known as Hollywood’s actors’ union, its 160,000-strong membership includes DJs, news anchors, and voiceover artists, as well as podcast hosts and audiobook narrators.
A SAG-AFTRA strike would be against the Alliance of Motion Picture and Television Producers (AMPTP) and would only impact contracts bargained with that group. Productions covered by SAG’s TV and theatrical contracts would be considered off-limits.
“Only productions that are covered by the TV/Theatrical Codified Basic Agreement and Television Agreement would be struck [in the event of a strike]. Scripted dramatic live action entertainment production that is covered by the SAG-AFTRA TV/Theatrical Contracts would be considered struck work,” wrote SAG-AFTRA’s chief communications officer Pamela Greenwalt in an email.
In other words, most podcast and audiobook contracts under SAG-AFTRA would not be considered “struck” work. This is in contrast to the ongoing WGA strike, where writing on scripted, fiction podcasts covered by WGA isn’t kosher and striking members are not allowed to work on non-union projects.
“So while work by any member (celebrity or otherwise) under SAG-AFTRA’s Audiobook Contracts would NOT be covered by a TV/Theatrical strike, all members will honor that action in the areas of work that are impacted should a strike need to be called,” clarified Greenwalt.
That means that for performers looking to work during a Hollywood strike, the audio world may become their go-to destination. Celebrity audio dramas have certainly been in vogue lately, with the likes of Demi Moore, Chris Pine, Rami Malek, Matthew McConaughey, and others contributing their voice talents to fiction podcasts. Audible has showcased a number of celebrity-narrated audiobooks by Meryl Streep, Tom Hanks, Nicole Kidman, Thandiwe Newton, and others.
It’s still uncertain whether SAG-AFTRA will even call a strike. The union is scheduled to start contract negotiations with AMPTP on June 7th. In the event that they’re unable to reach a deal with the studios, SAG-AFTRA can then take steps to go on strike.

How to Run Your Own ChatGPT-Like LLM for Free (and in Private)

The power of large language models (LLMs) such as ChatGPT, generally made possible by cloud computing, is obvious, but have you ever thought about running an AI chatbot on your own laptop or desktop? Depending on how modern your system is, you can likely run LLMs on your own hardware. But why would you want to?
Well, maybe you want to fine-tune a tool for your own data. Perhaps you want to keep your AI conversations private and offline. You may just want to see what AI models can do without the companies running cloud servers shutting down any conversation topics they deem unacceptable. With a ChatGPT-like LLM on your own hardware, all of these scenarios are possible.
And hardware is less of a hurdle than you might think. The latest LLMs are optimized to work with Nvidia graphics cards and with Macs using Apple M-series processors—even low-powered Raspberry Pi systems. And as new AI-focused hardware comes to market, like the integrated NPU of Intel’s “Meteor Lake” processors or AMD’s Ryzen AI, locally run chatbots will be more accessible than ever before.
Thanks to platforms like Hugging Face and communities like Reddit’s LocalLLaMA, the software models behind sensational tools like ChatGPT now have open-source equivalents—in fact, more than 200,000 different models are available at this writing. Plus, thanks to tools like Oobabooga’s Text Generation WebUI, you can access them in your browser using clean, simple interfaces similar to ChatGPT, Bing Chat, and Google Bard.
So, in short: Locally run AI tools are freely available, and anyone can use them. However, none of them is ready-made for non-technical users, and the category is new enough that you won’t find many easy-to-digest guides or instructions on how to download and run your own LLM. It’s also important to remember that a local LLM won’t be nearly as fast as a cloud-server platform because its resources are limited to your system alone.
Nevertheless, we’re here to help the curious with a step-by-step guide to setting up your own ChatGPT alternative on your own PC. Our guide uses a Windows machine, but the tools listed here are generally available for Mac and Linux systems as well, though some extra steps may be involved when using different operating systems.
Some Warnings About Running LLMs Locally
First, however, a few caveats—scratch that, a lot of caveats. As we said, these models are free, made available by the open-source community. They rely on a lot of other software, which is usually also free and open-source. That means everything is maintained by a hodgepodge of solo programmers and teams of volunteers, along with a few massive companies like Facebook and Microsoft. The point is that you’ll encounter a lot of moving parts, and if this is your first time working with open-source software, don’t expect it to be as simple as downloading an app on your phone. Instead, it’s more like installing a bunch of software before you can even think about downloading the final app you want—which then still may not work. And no matter how thorough and user-friendly we try to make this guide, you may run into obstacles that we can’t address in a single article.
Also, finding answers can be a real pain. The online communities devoted to these topics are usually helpful in solving problems. Often, someone’s solved the problem you’re encountering in a conversation you can find online with a little searching. But where is that conversation? It might be on Reddit, in an FAQ, on a GitHub page, in a user forum on HuggingFace, or somewhere else entirely.
It’s worth repeating that open-source AI is moving fast. Every day new models are released, and the tools used to interact with them change almost as often, as do the underlying training methods and data, and all the software undergirding that. As a topic to write about or to dive into, AI is quicksand. Everything moves whip-fast, and the environment undergoes massive shifts on a constant basis. So much of the software discussed here may not last long before newer and better LLMs and clients are released.
Bottom line: Proceed at your own risk. There’s no Geek Squad to call for help with open-source software; it’s not all professionally maintained; and you’ll find no handy manual to read or customer service department to turn to—just a bunch of loosely organized online communities.
Finally, once you get it all running, these AI models have varying degrees of polish, but they all carry the same warnings: Don’t trust what they say at face value, because it’s often wrong. Never look to an AI chatbot to help make your health or financial decisions. The same goes for writing your school essays or your website articles. Also, if the AI says something offensive, try not to take it personally. It’s not a person passing judgment or spewing questionable opinions; it’s a statistical word generator made to spit out mostly legible sentences. If any of this sounds too scary or tedious, this may not be a project for you.
Select Your Hardware
Before you begin, you’ll need to know a few things about the machine on which you want to run an LLM. Is it a Windows PC, a Mac, or a Linux box? This guide, again, will focus on Windows, but most of the resources referenced offer additional options and instructions for other operating systems.
You also need to know whether your system has a discrete GPU or relies on its CPU’s integrated graphics. Plenty of open-source LLMs can run solely on your CPU and system memory, but most are made to leverage the processing power of a dedicated graphics chip and its extra video RAM. Gaming laptops, desktops, and workstations are better suited to these applications, since they have the powerful graphics hardware these models often rely on.
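If you’re unsure whether your machine has a discrete Nvidia GPU, one quick heuristic is to check for the nvidia-smi utility, which ships with Nvidia’s drivers. This is only a sketch of that check, not a definitive hardware probe:

```python
import shutil

# nvidia-smi ships with Nvidia's driver package, so finding it on the
# PATH is a reasonable hint that a discrete Nvidia GPU is present.
# (A heuristic only -- it won't detect AMD GPUs or Apple silicon.)
has_nvidia_gpu = shutil.which("nvidia-smi") is not None
print("Discrete Nvidia GPU likely:", has_nvidia_gpu)
```

On systems without Nvidia drivers installed, this simply prints `False`, which tells you to plan for the CPU-only or AMD/Apple paths mentioned above.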
Gaming laptops and mobile workstations offer the best hardware for running LLMs at home. (Credit: Molly Flores)
In our case, we’re using a Lenovo Legion Pro 7i Gen 8 gaming notebook, which combines a potent Intel Core i9-13900HX CPU, 32GB of system RAM, and a powerful Nvidia GeForce RTX 4080 mobile GPU with 12GB of dedicated VRAM.
If you’re on a Mac or Linux system, are CPU-dependent, or are using AMD instead of Intel hardware, be aware that while the general steps in this guide are correct, you may need extra steps and additional or different software to install. And the performance you see could be markedly different from what we discuss here.
Set Up Your Environment and Required Dependencies
To start, you must download some necessary software: Microsoft Visual Studio 2019. Any updated release of Visual Studio 2019 will work (though not the newer annualized versions), but we recommend getting the latest one directly from Microsoft.
(Credit: Brian Westover/Microsoft)
Personal users can safely skip the Enterprise and Professional versions and use just the BuildTools version of the software.
Find the latest version of Visual Studio 2019 and download the BuildTools version (Credit: Brian Westover/Microsoft)
After choosing that, be sure to select “Desktop Development with C++.” This step is essential in order for other pieces of software to work properly.
Be sure to select “Desktop development with C++.” (Credit: Brian Westover/Microsoft)
Begin your download and kick back: Depending on your internet connection, it could take several minutes before the software is ready to launch.
(Credit: Brian Westover/Microsoft)
Download Oobabooga’s Text Generation WebUI Installer
Next, you need to download the Text Generation WebUI tool from Oobabooga. (Yes, it’s a silly name, but the GitHub project makes an easy-to-install and easy-to-use interface for AI stuff, so don’t get hung up on the moniker.)
(Credit: Brian Westover/Oobabooga)
To download the tool, you can either navigate through the GitHub page or go directly to the collection of one-click installers Oobabooga has made available. We’ve installed the Windows version, but this is also where you’ll find installers for Linux and macOS. Download the zip file shown below.
(Credit: Brian Westover/Oobabooga)
Create a new file folder someplace on your PC that you’ll remember and name it AI_Tools or something similar. Do not use any spaces in the folder name, since that will mess up some of the automated download and install processes of the installer.
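The no-spaces rule is easy to check programmatically. Here’s a minimal sketch (the folder names are just illustrations) of the kind of test the installer effectively depends on:

```python
def is_safe_install_path(path: str) -> bool:
    """Return True if a path is safe for the one-click installer.

    The installer's batch scripts can mishandle paths that contain
    spaces, so reject any path with a space anywhere in it.
    """
    return " " not in path

# Hypothetical examples: a safe folder name and an unsafe one.
print(is_safe_install_path(r"C:\AI_Tools"))     # True
print(is_safe_install_path(r"C:\My AI Tools"))  # False
```

Note that the check applies to the full path, not just the final folder name: a parent folder with a space in it causes the same problem.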
(Credit: Brian Westover/Microsoft)
Then, extract the contents of the zip file you just downloaded into your new AI_Tools folder.
Run the Text Generation WebUI Installer
Once the zip file has been extracted to your new folder, look through the contents. You should see several files, including one called start_windows.bat. Double-click it to begin installation.
Depending on your system settings, you might get a warning about Windows Defender or another security tool blocking this action, because it’s not from a recognized software vendor. (We haven’t experienced or seen anything reported online to indicate that there’s any problem with these files, but we’ll repeat that you do this at your own risk.) If you wish to proceed, select “More info” to confirm whether you want to run start_windows.bat. Click “Run Anyway” to continue the installation.
(Credit: Brian Westover/Microsoft)
Now, the installer will open up a command prompt (CMD) and begin installing the dozens of software pieces necessary to run the Text Generation WebUI tool. If you’re unfamiliar with the command-line interface, just sit back and watch.
(Credit: Brian Westover/Microsoft)
First, you’ll see a lot of text scroll by, followed by simple progress bars made up of hashtag or pound symbols, and then a text prompt will appear. It will ask you what your GPU is, giving you a chance to indicate whether you’re using Nvidia, AMD, or Apple M series silicon or just a CPU alone. You should already have figured this out before downloading anything. In our case, we select A, because our laptop has an Nvidia GPU.
(Credit: Brian Westover/Microsoft)
Once you’ve answered the question, the installer will handle the rest. You’ll see plenty of text scroll by, followed first by simple text progress bars and then by more graphically pleasing pink and green progress bars as the installer downloads and sets up everything it needs.
(Credit: Brian Westover/Microsoft)
At the end of this process (which may take up to an hour), you’ll be greeted by a warning message surrounded by asterisks. This warning will tell you that you haven’t downloaded any large language model yet. That’s good news! It means that Text Generation WebUI is just about done installing.
(Credit: Brian Westover/Microsoft)
At this point you’ll see some text in green that reads “Info: Loading the extension gallery.” Your installation is complete, but don’t close the command window yet.
(Credit: Brian Westover/Microsoft)
Copy and Paste the Local Address for WebUI
Immediately below the green text, you’ll see another line that says “Running on local URL: http://127.0.0.1:7860.” Just click that URL text, and it will open your web browser, serving up the Text Generation WebUI—your interface for all things LLM.
(Credit: Brian Westover/Microsoft)
You can save this URL somewhere or bookmark it in your browser. Even though Text Generation WebUI is accessed through your browser, it runs locally, so it’ll work even if your Wi-Fi is turned off. Everything in this web interface is local, and the data generated should be private to you and your machine.
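You can verify that claim from the address itself: 127.0.0.1 is the loopback interface, so requests to it never leave your machine. A short sketch using the standard library:

```python
from urllib.parse import urlparse
import ipaddress

# The address WebUI prints at startup. 127.0.0.1 is the loopback
# interface, meaning the "web" traffic stays on your own machine.
url = "http://127.0.0.1:7860"

host = urlparse(url).hostname
is_local = ipaddress.ip_address(host).is_loopback
print(is_local)  # True
```

That `is_loopback` check is the same reason the interface keeps working with Wi-Fi turned off: the browser is talking to a server running on your own PC.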
(Credit: Brian Westover/Oobabooga)
Close and Reopen WebUI
Once you’ve successfully accessed the WebUI to confirm it’s installed correctly, go ahead and close both the browser and your command window.
In your AI_Tools folder, open up the same start_windows batch file that we ran to install everything. It will reopen the CMD window but, instead of going through that whole installation process, will load up a small bit of text including the green text from before telling you that the extension gallery is loaded. That means the WebUI is ready to open again in your browser.
(Credit: Brian Westover/Oobabooga)
Use the same local URL you copied or bookmarked earlier, and you’ll be greeted once again by the WebUI interface. This is how you will open the tool in the future, leaving the CMD window open in the background.
Select and Download an LLM
Now that you have the WebUI installed and running, it’s time to find a model to load. As we said, you’ll find thousands of free LLMs you can download and use with WebUI, and the process of installing one is pretty straightforward.
If you want a curated list of the most recommended models, you can check out a community like Reddit’s /r/LocalLLaMA, which includes a community wiki page that lists several dozen models. It also includes information about what different models are built for, as well as data about which models are supported by different hardware. (Some LLMs specialize in coding tasks, while others are built for natural text chat.)
These lists will all end up sending you to Hugging Face, which has become a repository of LLMs and resources. If you came here from Reddit, you were probably directed straight to a model card, which is a dedicated information page about a specific downloadable model. These cards provide general information (like the datasets and training techniques that were used), a list of files to download, and a community page where people can leave feedback as well as request help and bug fixes.
At the top of each model card is a big, bold model name. In our case, we used the WizardLM 7B Uncensored model made by Eric Hartford. He uses the screen name ehartford, so the model’s listed location is “ehartford/WizardLM-7B-Uncensored,” exactly how it’s listed at the top of the model card.
Next to the title is a little copy icon. Click it, and it will save the properly formatted model name to your clipboard.
(Credit: Brian Westover/Hugging Face)
Back in WebUI, go to the model tab and enter that model name into the field labeled “Download custom model or LoRA.” Paste in the model name, hit Download, and the software will start downloading the necessary files from Hugging Face.
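For the curious, what the downloader is doing under the hood is fetching the repository’s files over HTTPS from Hugging Face, whose public download URLs follow a predictable scheme. This sketch builds such a URL from the model name we copied (the filename here is a hypothetical example):

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct download URL for a file in a Hugging Face repo.

    Mirrors Hugging Face's public URL scheme:
    https://huggingface.co/<repo>/resolve/<revision>/<file>
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Model id copied from the model card, plus an illustrative filename.
print(hf_file_url("ehartford/WizardLM-7B-Uncensored", "config.json"))
```

Knowing this pattern is handy if a download stalls: you can paste the same URL into a browser and fetch the file manually into the model’s folder.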
(Credit: Brian Westover/Oobabooga)
If successful, you’ll see an orange progress bar pop up in the WebUI window and several progress bars will appear in the command window you left open in the background.
(Credit: Brian Westover/Oobabooga)
(Credit: Brian Westover/Oobabooga)
Once it’s finished (again, be patient), the WebUI progress bar will disappear and it will simply say “Done!” instead.
Load Your Model and Settings in WebUI
Once you’ve got a model downloaded, you need to load it up in WebUI. To do this, select it from the drop-down menu at the upper left of the model tab. (If you have multiple models downloaded, this is where you choose one to use.)
Before you can use the model, you need to allocate some system or graphics memory (or both) to running it. While you can tweak and fine-tune nearly anything you want in these models, including memory allocation, I’ve found that setting it at roughly two-thirds of both GPU and CPU memory works best. That leaves enough unused memory for your other PC functions while still giving the LLM enough memory to track and hold a longer conversation.
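The two-thirds rule of thumb is easy to apply to your own numbers. A small sketch, using our test machine’s 12GB of VRAM and 32GB of RAM as the inputs:

```python
def suggested_allocation(total_gb: float, fraction: float = 2 / 3) -> int:
    """Suggest how much memory (in GB) to allocate to the model.

    Rule of thumb from this guide: give the LLM roughly two-thirds of
    both VRAM and system RAM, leaving the rest for the OS and other apps.
    """
    return int(total_gb * fraction)

print(suggested_allocation(12))  # 8  (GPU VRAM on our laptop)
print(suggested_allocation(32))  # 21 (system RAM on our laptop)
```

These are starting points, not hard limits; if you see out-of-memory errors or your system becomes sluggish, adjust the sliders down and save again.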
(Credit: Brian Westover/Oobabooga)
Once you’ve allocated memory, hit the Save Settings button to save your choice, and it will default to that memory allocation every time. If you ever want to change it, you can simply reset it and press Save Settings again.
Enjoy Your LLM!
With your model loaded up and ready to go, it’s time to start chatting with your ChatGPT alternative. Navigate within WebUI to the Text Generation tab. Here you’ll see the actual text interface for chatting with the AI. Enter text into the box, hit Enter to send it, and wait for the bot to respond.
(Credit: Brian Westover/Oobabooga)
Here, we’ll say again, is where you’ll experience a little disappointment: Unless you’re using a super-duper workstation with multiple high-end GPUs and massive amounts of memory, your local LLM won’t be anywhere near as quick as ChatGPT or Google Bard. The bot will spit out fragments of words (called tokens) one at a time, with a noticeable delay between each.
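That token-by-token behavior can be illustrated with a toy generator. Real models emit subword tokens from a learned vocabulary; splitting on spaces here is only a stand-in to show the streaming pattern:

```python
import time

def stream_tokens(text: str, delay: float = 0.0):
    """Yield a response one "token" at a time, like a local LLM does.

    The delay stands in for per-token inference time, which on modest
    hardware is long enough to be clearly visible.
    """
    for token in text.split():
        time.sleep(delay)
        yield token

# Collect the streamed tokens back into a full reply.
reply = []
for tok in stream_tokens("Hello from your local language model"):
    reply.append(tok)
print(" ".join(reply))
```

The WebUI does essentially this in reverse: it receives tokens as the model produces them and appends each one to the chat window, which is why replies appear to be typed out.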
However, with a little patience, you can have full conversations with the model you’ve downloaded. You can ask it for information, play chat-based games, even give it one or more personalities. Plus, you can use the LLM with the assurance that your conversations and data are private, which gives peace of mind.
You’ll encounter a ton of content and concepts to explore while starting with local LLMs. As you use WebUI and different models more, you’ll learn more about how they work. If you don’t know your text from your tokens, or your GPTQ from a LoRA, these are ideal places to start immersing yourself in the world of machine learning.
Steps to delete your personal information from the dark web

Have you ever wondered what to do if your identity is stolen and sold on the dark web? Many people face this scary situation every day, and they don’t know how to deal with it. That includes Mary of Upper Chichester, Pennsylvania.
“I have a major issue with my Motorola Android cell phone. Motorola’s tech department wasn’t helpful at all.
“I was alerted by Capital One that my identity was being sold on the dark web. I did contact all of the credit reporting agencies to notify them and place alerts on my credit report. That’s about all I have done so far. My issue is how do I remove my personal information from the dark web and is my phone now useless?
“Do I need to get a new phone or is there any easy way to secure my current phone?
Woman smiles at her Android (Cyberguy.com)
GET MORE OF MY TECH TIPS & EASY VIDEO TUTORIALS WITH THE FREE CYBERGUY NEWSLETTER – CLICK HERE
“I’m worried about someone using my personal information to commit criminal acts using my identity. Please tell me the easiest way to rectify this scary situation. What should I do next?”
Mary, Upper Chichester, PA
Mary, I’m sorry to hear that your identity was being sold on the dark web. I’m glad you contacted the credit reporting agencies to alert them and place alerts on your credit report. That’s one of several smart moves to protect your credit from fraud. As for removing your personal information from the dark web, fortunately, there are several ways to approach this, which we’ll get into below.
What do I do if my data has been stolen?
Log out of all of your accounts: If you see that your information was part of any sort of breach, you should first log out of all your accounts on every web browser on your computer. Once you’ve done that, you should clear your cookies and cache.
Change your password: If your password was compromised, be sure to change it immediately. Consider using a password manager to generate and store complex passwords.
MY TIPS AND BEST EXPERT-REVIEWED PASSWORD MANAGERS OF 2023 CAN BE FOUND HERE
Remove yourself from the internet: While no service promises to remove all your data from the internet, a removal service is great if you want to continuously monitor and automate the process of removing your information from hundreds of sites. Doing so significantly decreases the chances of a scammer being able to get your information to target you.

Hackers looking a computer. (Cyberguy.com)
HOW TO FIGHT BACK AGAINST DEBIT CARD HACKERS WHO ARE AFTER YOUR MONEY
SEE MY TIPS AND BEST PICKS FOR REMOVING YOUR PERSONAL INFORMATION FROM THE INTERNET
Invest in Antivirus protection: The best way to protect yourself from accidentally clicking a malicious link that would allow hackers access to your personal information is to have antivirus protection installed and actively running on all your devices.
See my expert review of the best antivirus protection for your Windows, Mac, Android & iOS devices.
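If you’d rather not rely on a password manager to generate passwords, the same idea is a few lines of code. This sketch uses Python’s cryptographically secure secrets module; the length and character set are arbitrary choices you can tighten to a site’s rules:

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password with a cryptographically secure RNG.

    Draws from letters, digits, and punctuation; 16 characters is an
    arbitrary but reasonable default.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(len(pw))  # 16
```

The important detail is using `secrets` rather than the ordinary `random` module, which is not designed to resist prediction.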
Do you need a new phone if your personal info is on the dark web?
As for your Android phone, you should be sure to do a malware scan and implement necessary security measures to prevent hackers from accessing it again. Here are some steps you can take to secure your Android phone from hackers:
Do a malware scan of your Android device. You should scan your phone with reputable antivirus protection, and remove any suspicious apps or files.
- Phishing and malware are common tactics that hackers use to trick you into clicking on malicious links or attachments that can infect your Android phone with spyware or ransomware.
- You should be careful about opening emails, texts, or messages from unknown senders or sources that look suspicious or too good to be true.
- Avoid downloading apps from unofficial sources or websites that may contain malware.
Update your software: Make sure you have the latest version of Android and any apps you use on your phone. Software updates often fix security vulnerabilities that hackers can exploit. You can check for updates in your phone’s settings or in the Google Play Store. Learn how to update your Android or iPhone.
Use a strong password or PIN: Lock your phone with a password or PIN that is hard to guess or crack. You can also use biometric authentication, such as fingerprint or face recognition, if your phone supports it. You should also change your passwords and log out of any accounts that may have been compromised.
Enable two-factor authentication: Two-factor authentication (2FA) adds an extra layer of security to your online accounts by requiring you to enter a code or use an app to verify your identity when you log in. You can enable 2FA on services that offer it, such as Google, Facebook, Twitter, etc. You should also use a different device to receive the codes or use an authentication app like the ones described here.

Password protection service (Cyberguy.com)
Kurt’s key takeaways
Mary’s story sheds light on the reality many people face when grappling with identity theft and the dark web. Quick action is key: notify the credit agencies as soon as you discover your info has been stolen or is being misused.
Remember, once your personal info is on the dark web, it isn’t easily erased, but you can take the steps above to start removing it. And when it comes to your phone, securing it with updates, antivirus software, strong passwords, and cautious behavior will help thwart potential hackers.
Safeguarding your identity is a constant battle, but that’s the reality of where we are today. Staying proactive is your best armor.
What frustrates you most about having to always be on guard when it comes to your tech and security? Do you wish our government did more to find those responsible for perpetuating the dark web and its crimes? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Copyright 2023 CyberGuy.com. All rights reserved.
Technology
Google’s whiteboarding app is joining the graveyard
It’s never a dull day at the Google graveyard: in a Workspace update published today, the company announced the end of its collaborative Jamboard whiteboarding software. Google plans to wind down the app in late 2024 and is introducing the “next phase” of its whiteboarding solutions, which amounts to pointing users toward third-party apps that integrate with Workspace services like Google Meet, Drive, and Calendar.
Google says it will offer support to help transition customers to other whiteboard tools, including FigJam, Lucidspark, and Miro. According to the blog post, Workspace customer feedback indicated that these third-party options worked better for them, thanks to features like an infinite canvas, use-case templates, and voting. So instead of further developing Jamboard, Google is digging its grave and will focus on core collaboration features in Docs, Sheets, and Slides.
The Jamboard app will hit its first phase-out step on October 1st, 2024, when it becomes a read-only app and users will no longer be able to create new Jams or edit old ones on any platform. Users will then have until December 31st, 2024, to back up their Jam files; after that, Google will cut off access and begin permanently deleting files. Google plans to provide “clear paths” to retain and migrate Jam data to FigJam, Lucidspark, and Miro “within just a few clicks.” The resources will be available “well before” the app winds down in late 2024.
You might remember Google’s $5,000 Jamboard whiteboarding display for meeting rooms; that’s also discontinued. The Jamboard hardware will stop receiving software updates on September 30th, 2024, and its license subscriptions will expire the same day. Companies and schools with an upcoming renewal can remain subscribers up to that date at a prorated rate if they’d prefer to delay the transition as long as possible. The 55-inch Jamboard device will reach end of life on October 1st, 2024.
If you need new whiteboarding hardware for your meeting rooms, Google suggests its own Google Meet Series One screens: the Board 65 and the Desk 27. And Google will connect educational institutions with Figma, Lucid Software, and Miro to help them transition. It can’t send those outside products to the graveyard, since it doesn’t own them.