Technology

I briefly played with Logitech’s new G Cloud Gaming Handheld

Yesterday, Logitech introduced its $349.99 G Cloud Gaming Handheld, which is coming out in the US on October 17th (until then, it's $50 off to preorder). Today, I got to briefly try it out. It was only a 10-minute demo, but it was long enough for me to snap a few pictures, launch some apps, and see how it felt in my hands. We'll have a full review in the coming weeks.

As I arrived at the testing station, Deathloop (freshly available on Xbox Game Pass) was streaming via Wi-Fi to the handheld's Xbox Cloud Gaming app. Unfortunately, it was the action-less intro sequence, but I still got to sprint and jump around. Though it wasn't a fun killer, like all of my experiences with cloud game streaming, there was just a whiff of input lag that, at least for me, is tough to ignore. On the plus side, the G Cloud's buttons, triggers, and analog stick layout feel good. As for visual fidelity, it's tough to know how much can be blamed on a congested Wi-Fi network, but the game's dark environments looked a little fuzzy on its seven-inch 1080p IPS panel.

The cloud version of Fortnite felt pretty good to play on the handheld, even with a hint of input latency.
Photo by Cameron Faulkner / The Verge

That wasn't the case when I switched to Fortnite via the Nvidia GeForce Now app. Exiting Xbox Game Pass and booting into a new app was satisfyingly speedy. My initial impression is that if your baseline expectation for speed in a handheld is the Nintendo Switch, you'll probably be impressed with how responsive the performance and interface navigation feel; perhaps not so much if you're coming from a Steam Deck. At its best, Fortnite on the G Cloud Gaming Handheld looks better and runs smoother than it does on the Switch (not a very high bar, I know), though that depends entirely on the capabilities of your Wi-Fi network. Of course, since this is an Android-based handheld, it's probably possible to load actual Fortnite onto this thing and not worry about the whole cloud aspect. Though, I'm not sure how well it'd run with its Snapdragon 720G and 4GB of RAM.

The rest of my time with the G Cloud Gaming Handheld was spent getting lost in its Android launcher, which Tencent apparently assisted in developing and which feels ripped out of the Android Honeycomb days (though the unit I tested was running Android 11). It's easy enough to find all of your apps, apart from the gaming-focused ones that it puts front and center. When you're looking at your full app library, you can click a face button that serves as a portal to the Google Play Store, where you can download almost anything, I'd imagine. Aesthetically, the user interface is going for a gamer-y vibe that didn't totally click with me.

A top view of the Logitech G Cloud Gaming Handheld that shows its shoulder buttons, which are covered in a textured plastic.

The shoulder buttons and grips are covered in textured plastic to offer extra, well, grip.
Photo by Cameron Faulkner / The Verge

The G Cloud Handheld is comfortable to hold. The built-in grips offer a good amount of palm support, and the textured plastic around its back and on the triggers is a nice touch. In terms of ergonomics alone, I'd definitely prefer to lose a few hours playing games on this than on the Switch. On the bottom, there's a headphone jack next to a USB-C port that's used primarily for charging. It can't support pushing video out to external displays (I asked), though it'll work with USB-C audio transmitters for headsets that offer that sort of thing. On the top left of the handheld's rail, there's a volume rocker next to a sleep switch (you can power it down through the software, as well). And finally, there's a microSD card slot over on the right side, next to the right shoulder buttons.

This image showcases the volume and power buttons located on the Logitech G Cloud Gaming Handheld

There's a power slider next to a volume rocker along the top rail of the handheld.
Photo by Cameron Faulkner / The Verge

This handheld looks and feels well designed, and it took no time at all for me to feel like this is a device I want to spend a lot more time testing. Though, like most Logitech products, as polished as it feels, spending time with it didn't change that I'm not a fan of its $349.99 retail price. You have to be totally bought in, not just to this handheld but to the services you want to play games on. So, the cost only goes up from there.

This image shows the charging port and 3.5mm headphone jack located on the bottom of the Logitech G Cloud Gaming handheld.

The handheld doesn't support video out via USB-C, but you can plug in USB-C audio transmitters for wireless headsets, in addition to charging.
Photo by Cameron Faulkner / The Verge

Looking beyond this handheld, it's really hard to overstate how much value some of the other popular handheld consoles offer right now, including the $199 Switch Lite or the more capable $299 Switch that can connect to a TV. Not to mention, the Steam Deck's $399 starting price is a tempting alternative if you want to play PC games on the go. Even so, Android tablets turned into handhelds that you can actually buy are just uncommon enough that the G Cloud Gaming Handheld could be a hit. We'll have to see.


Technology

Watch Linda Yaccarino’s wild interview at the Code Conference


On Wednesday evening, X CEO Linda Yaccarino appeared onstage at the Code Conference with frustration and protest. “I think many people in this room were not fully prepared for me to still come out on the stage,” she told interviewer Julia Boorstin, senior media and tech correspondent at CNBC.

Yaccarino sounded rattled. She’d found out earlier in the day that Kara Swisher, a Code Conference co-founder, had booked a surprise guest to appear an hour before her: Yoel Roth, Twitter’s former head of trust and safety. He has been an outspoken critic of the direction Elon Musk has taken the site.

In his interview with Swisher, Roth recounted how Musk put him personally in danger. Musk suggested on Twitter that Roth had advocated for sexualizing children — a completely unfounded claim — which led to death threats and his address being posted online. “I had to sell my house. I had to move,” Roth said. He encouraged Yaccarino to think about how Musk could turn on her, too, and said the site was bleeding users and advertisers.

These criticisms are nothing new, but Yaccarino was visibly bothered by having to appear shortly after a well-known critic of her company. “I’d be happy to respond,” Yaccarino said. “I think I’ve been given about 45 minutes [of notice].” The conference’s 300-some-seat ballroom was packed for her appearance; I caught Swisher reclining on a couch in the back before things kicked off, waiting to see the results of her surprise play out.

“I work at X, he worked at Twitter.”

Throughout the interview, Yaccarino repeated that she’s only been on the job at X for 12 weeks, as if to say there’s only so much she could have done by now. But in that time, she’s managed to do one thing consistently: dismiss concerns about X, whether it’s the platform’s disinvestment in moderation or Musk’s chaotic leadership.

Her dismissive stance was very much on display Wednesday night. She wrote off Roth’s claims about the platform’s performance as outdated (“I work at X, he worked at Twitter,” she said); she said the Anti-Defamation League — which Musk is threatening to sue — pays too much attention to the antisemitism on X and not enough to the improvements the platform has made; and she argued that despite the panic around advertisers fleeing, most of the big ones are coming back.

These were not satisfying answers if you’re a person who thinks Musk is destroying Twitter or stoking harassment. But they were, for the most part, confident answers. Yaccarino answered slowly and carefully, and she seemed determined to push the complaints aside as collateral damage for reinventing the platform.

“X is a new company building a foundation based on free expression and freedom of speech,” she said at the start.

“We talk about everything.”

Boorstin prodded Yaccarino for hard numbers on how X is doing amid all these crises. Yaccarino said the company would be profitable "in early '24." The platform has 200–250 million daily active users. "Something like that," Yaccarino said. She went to check her phone, as if to confirm the number, but never finished checking. Later, she suggested that X has 540 million monthly active users and 225 million daily active users. (That would be slightly down from the 238 million daily users Twitter had before the acquisition.)

It was hard to know whether Yaccarino wasn’t prepared enough or if she simply didn’t want to give definitive answers. At one point, Boorstin asked about Musk’s recent statement that X will eventually charge all users to post on the platform, and Yaccarino appeared unable to speak to the proposed change.

“Can you repeat?” Yaccarino asked.

“Elon Musk announced you’re moving to an entirely subscription-based service,” Boorstin said. “Nothing free about using X.”

“Did he say we were moving to it specifically or is thinking about it?” Yaccarino asked.

“He said that’s the plan,” Boorstin said. “Did he consult you before he announced that?”

“We talk about everything,” Yaccarino said. She never clarified X’s plans.

The interview was filled with rocky moments like this, but Yaccarino didn’t run. As the clock ran out on the interview, Boorstin said she would let Yaccarino stay as long as she wanted — and Yaccarino took her up on it. Even when Yaccarino finally said she needed to leave, she stuck around for a few more questions. At close to 40 minutes, it was the longest interview of the conference.

It was a long conversation and a far more complicated one than Yaccarino may have wanted to have. At the top of the interview, she tried to push aside concerns about X with a statement of confidence. “It’s a new day at X, and I’ll leave it at that,” she said.

But it could never be that simple. As long as Musk is in the mix, she’ll always need more time to explain whatever’s going on at X.


Technology

How to Run Your Own ChatGPT-Like LLM for Free (and in Private)


The power of large language models (LLMs) such as ChatGPT, generally made possible by cloud computing, is obvious, but have you ever thought about running an AI chatbot on your own laptop or desktop? Depending on how modern your system is, you can likely run LLMs on your own hardware. But why would you want to?

Well, maybe you want to fine-tune a tool for your own data. Perhaps you want to keep your AI conversations private and offline. You may just want to see what AI models can do without the companies running cloud servers shutting down any conversation topics they deem unacceptable. With a ChatGPT-like LLM on your own hardware, all of these scenarios are possible.

And hardware is less of a hurdle than you might think. The latest LLMs are optimized to work with Nvidia graphics cards and with Macs using Apple M-series processors—even low-powered Raspberry Pi systems. And as new AI-focused hardware comes to market, like the integrated NPU of Intel’s “Meteor Lake” processors or AMD’s Ryzen AI, locally run chatbots will be more accessible than ever before.

Thanks to platforms like Hugging Face and communities like Reddit’s LocalLlaMA, the software models behind sensational tools like ChatGPT now have open-source equivalents—in fact, more than 200,000 different models are available at this writing. Plus, thanks to tools like Oobabooga’s Text Generation WebUI, you can access them in your browser using clean, simple interfaces similar to ChatGPT, Bing Chat, and Google Bard.


The software models behind sensational tools like ChatGPT now have open-source equivalents—in fact, more than 200,000 different models are available.

So, in short: Locally run AI tools are freely available, and anyone can use them. However, none of them is ready-made for non-technical users, and the category is new enough that you won’t find many easy-to-digest guides or instructions on how to download and run your own LLM. It’s also important to remember that a local LLM won’t be nearly as fast as a cloud-server platform because its resources are limited to your system alone.

Nevertheless, we’re here to help the curious with a step-by-step guide to setting up your own ChatGPT alternative on your own PC. Our guide uses a Windows machine, but the tools listed here are generally available for Mac and Linux systems as well, though some extra steps may be involved when using different operating systems.


Some Warnings About Running LLMs Locally

First, however, a few caveats—scratch that, a lot of caveats. As we said, these models are free, made available by the open-source community. They rely on a lot of other software, which is usually also free and open-source. That means everything is maintained by a hodgepodge of solo programmers and teams of volunteers, along with a few massive companies like Facebook and Microsoft. The point is that you’ll encounter a lot of moving parts, and if this is your first time working with open-source software, don’t expect it to be as simple as downloading an app on your phone. Instead, it’s more like installing a bunch of software before you can even think about downloading the final app you want—which then still may not work. And no matter how thorough and user-friendly we try to make this guide, you may run into obstacles that we can’t address in a single article.

Also, finding answers can be a real pain. The online communities devoted to these topics are usually helpful in solving problems. Often, someone’s solved the problem you’re encountering in a conversation you can find online with a little searching. But where is that conversation? It might be on Reddit, in an FAQ, on a GitHub page, in a user forum on HuggingFace, or somewhere else entirely. 


AI is quicksand. Everything moves whip-fast, and the environment undergoes massive shifts on a constant basis.

It’s worth repeating that open-source AI is moving fast. Every day new models are released, and the tools used to interact with them change almost as often, as do the underlying training methods and data, and all the software undergirding that. As a topic to write about or to dive into, AI is quicksand. Everything moves whip-fast, and the environment undergoes massive shifts on a constant basis. So much of the software discussed here may not last long before newer and better LLMs and clients are released.

Bottom line: Proceed at your own risk. There’s no Geek Squad to call for help with open-source software; it’s not all professionally maintained; and you’ll find no handy manual to read or customer service department to turn to—just a bunch of loosely organized online communities.

Finally, once you get it all running, these AI models have varying degrees of polish, but they all carry the same warnings: Don’t trust what they say at face value, because it’s often wrong. Never look to an AI chatbot to help make your health or financial decisions. The same goes for writing your school essays or your website articles. Also, if the AI says something offensive, try not to take it personally. It’s not a person passing judgment or spewing questionable opinions; it’s a statistical word generator made to spit out mostly legible sentences. If any of this sounds too scary or tedious, this may not be a project for you.


Select Your Hardware

Before you begin, you’ll need to know a few things about the machine on which you want to run an LLM. Is it a Windows PC, a Mac, or a Linux box? This guide, again, will focus on Windows, but most of the resources referenced offer additional options and instructions for other operating systems.

You also need to know whether your system has a discrete GPU or relies on its CPU’s integrated graphics. Plenty of open-source LLMs can run solely on your CPU and system memory, but most are made to leverage the processing power of a dedicated graphics chip and its extra video RAM. Gaming laptops, desktops, and workstations are better suited to these applications, since they have the powerful graphics hardware these models often rely on.
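If you're not sure what your machine has, a quick sketch like the following can report the basics. (The nvidia-smi check is a heuristic of ours, not a step from the guide: that utility ships with Nvidia's drivers, so its absence usually means no usable Nvidia GPU.)

```python
import platform
import shutil

# Report the OS and whether Nvidia's driver utility is on the PATH.
# nvidia-smi ships with Nvidia drivers, so finding it is a reasonable
# (but not foolproof) sign of a discrete Nvidia GPU.
print("Operating system:", platform.system())
print("nvidia-smi found:", shutil.which("nvidia-smi") is not None)
```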

Lenovo Legion Pro 7i Gen 8

Gaming laptops and mobile workstations offer the best hardware for running LLMs at home. (Credit: Molly Flores)

In our case, we’re using a Lenovo Legion Pro 7i Gen 8 gaming notebook, which combines a potent Intel Core i9-13900HX CPU, 32GB of system RAM, and a powerful Nvidia GeForce RTX 4080 mobile GPU with 12GB of dedicated VRAM.

If you’re on a Mac or Linux system, are CPU-dependent, or are using AMD instead of Intel hardware, be aware that while the general steps in this guide are correct, you may need extra steps and additional or different software to install. And the performance you see could be markedly different from what we discuss here.


Set Up Your Environment and Required Dependencies

To start, you must download some necessary software: Microsoft Visual Studio 2019. Any updated version of Visual Studio 2019 will work (though not newer annualized releases), but we recommend getting the latest version directly from Microsoft.

Microsoft Visual Studio 2019 download page

(Credit: Brian Westover/Microsoft)

Personal users will be fine to skip the Enterprise and Professional versions and use just the BuildTools version of the software.

Microsoft Visual Studio 2019 download page

Find the latest version of Visual Studio 2019 and download the BuildTools version. (Credit: Brian Westover/Microsoft)

After choosing that, be sure to select “Desktop Development with C++.” This step is essential in order for other pieces of software to work properly.

Microsoft Visual Studio 2019 download selection

Be sure to select “Desktop development with C++.” (Credit: Brian Westover/Microsoft)

Begin your download and kick back: Depending on your internet connection, it could take several minutes before the software is ready to launch.

Microsoft Visual Studio 2019 download progress

(Credit: Brian Westover/Microsoft)


Download Oobabooga’s Text Generation WebUI Installer

Next, you need to download the Text Generation WebUI tool from Oobabooga. (Yes, it’s a silly name, but the GitHub project makes an easy-to-install and easy-to-use interface for AI stuff, so don’t get hung up on the moniker.)

Text Generation WebUI tool from Oobabooga GitHub page

(Credit: Brian Westover/Oobabooga)

To download the tool, you can either navigate through the GitHub page or go directly to the collection of one-click installers Oobabooga has made available. We’ve installed the Windows version, but this is also where you’ll find installers for Linux and macOS. Download the zip file shown below.

One-click installer packages for Text Generation WebUI

(Credit: Brian Westover/Oobabooga)

Create a new file folder someplace on your PC that you’ll remember and name it AI_Tools or something similar. Do not use any spaces in the folder name, since that will mess up some of the automated download and install processes of the installer.

Local folder for Text Generation WebUI files

(Credit: Brian Westover/Microsoft)

Then, extract the contents of the zip file you just downloaded into your new AI_Tools folder.
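For the script-inclined, those two manual steps look like this in Python. The stand-in zip is built on the fly so the example is self-contained; in practice you'd point `ZipFile` at the archive you downloaded from GitHub.

```python
import os
import tempfile
import zipfile

# Make a space-free folder, then extract the installer zip into it.
# Spaces in the path break the installer's automated scripts.
base = tempfile.mkdtemp()
target = os.path.join(base, "AI_Tools")
os.makedirs(target)
assert " " not in os.path.basename(target)

# Build a tiny stand-in zip so this sketch runs on its own; normally this
# would be the zip you downloaded from the Oobabooga GitHub page.
demo_zip = os.path.join(base, "installer.zip")
with zipfile.ZipFile(demo_zip, "w") as zf:
    zf.writestr("start_windows.bat", "rem placeholder")
with zipfile.ZipFile(demo_zip) as zf:
    zf.extractall(target)
print(sorted(os.listdir(target)))  # → ['start_windows.bat']
```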


Run the Text Generation WebUI Installer

Once the zip file has been extracted to your new folder, look through the contents. You should see several files, including one called start_windows.bat. Double-click it to begin installation.

Depending on your system settings, you might get a warning about Windows Defender or another security tool blocking this action, because it’s not from a recognized software vendor. (We haven’t experienced or seen anything reported online to indicate that there’s any problem with these files, but we’ll repeat that you do this at your own risk.) If you wish to proceed, select “More info” to confirm whether you want to run start_windows.bat. Click “Run Anyway” to continue the installation.

Installation warning

(Credit: Brian Westover/Microsoft)

Now, the installer will open up a command prompt (CMD) and begin installing the dozens of software pieces necessary to run the Text Generation WebUI tool. If you’re unfamiliar with the command-line interface, just sit back and watch.

Installation CMD window

(Credit: Brian Westover/Microsoft)

First, you’ll see a lot of text scroll by, followed by simple progress bars made up of hashtag or pound symbols, and then a text prompt will appear. It will ask you what your GPU is, giving you a chance to indicate whether you’re using Nvidia, AMD, or Apple M series silicon or just a CPU alone. You should already have figured this out before downloading anything. In our case, we select A, because our laptop has an Nvidia GPU.

Installation GPU check

(Credit: Brian Westover/Microsoft)

Once you’ve answered the question, the installer will handle the rest. You’ll see plenty of text scroll by, followed first by simple text progress bars and then by more graphically pleasing pink and green progress bars as the installer downloads and sets up everything it needs.

Text Generation WebUI installation progress

(Credit: Brian Westover/Microsoft)

At the end of this process (which may take up to an hour), you’ll be greeted by a warning message surrounded by asterisks. This warning will tell you that you haven’t downloaded any large language model yet. That’s good news! It means that Text Generation WebUI is just about done installing.

Text Generation WebUI tool "no model" warning

(Credit: Brian Westover/Microsoft)

At this point you’ll see some text in green that reads “Info: Loading the extension gallery.” Your installation is complete, but don’t close the command window yet.

Text Generation WebUI installer green text

(Credit: Brian Westover/Microsoft)


Copy and Paste the Local Address for WebUI 

Immediately below the green text, you’ll see another line that says “Running on local URL: http://127.0.0.1:7860.” Just click that URL text, and it will open your web browser, serving up the Text Generation WebUI—your interface for all things LLM.

Text Generation WebUI installer green text and local URL

(Credit: Brian Westover/Microsoft)

You can save this URL somewhere or bookmark it in your browser. Even though Text Generation WebUI is accessed through your browser, it runs locally, so it’ll work even if your Wi-Fi is turned off. Everything in this web interface is local, and the data generated should be private to you and your machine.

Text Generation WebUI open in browser

(Credit: Brian Westover/Oobabooga)
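That local-only claim is easy to verify: 127.0.0.1 is the loopback address, so requests to it never leave your machine. A quick sanity check (the URL is the tool's default):

```python
import ipaddress
from urllib.parse import urlparse

# 127.0.0.1 is the loopback address: traffic to it stays on your machine,
# which is why the WebUI keeps working even with Wi-Fi turned off.
url = "http://127.0.0.1:7860"
host = urlparse(url).hostname
assert ipaddress.ip_address(host).is_loopback
print(f"{host} is local-only")  # → 127.0.0.1 is local-only
```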


Close and Reopen WebUI

Once you’ve successfully accessed the WebUI to confirm it’s installed correctly, go ahead and close both the browser and your command window.

In your AI_Tools folder, open up the same start_windows batch file that we ran to install everything. It will reopen the CMD window but, instead of going through that whole installation process, will load up a small bit of text including the green text from before telling you that the extension gallery is loaded. That means the WebUI is ready to open again in your browser.

Text Generation WebUI open in browser

(Credit: Brian Westover/Oobabooga)

Use the same local URL you copied or bookmarked earlier, and you’ll be greeted once again by the WebUI interface. This is how you will open the tool in the future, leaving the CMD window open in the background.


Select and Download an LLM

Now that you have the WebUI installed and running, it’s time to find a model to load. As we said, you’ll find thousands of free LLMs you can download and use with WebUI, and the process of installing one is pretty straightforward.

If you want a curated list of the most recommended models, you can check out a community like Reddit’s /r/LocalLlaMA, which includes a community wiki page that lists several dozen models. It also includes information about what different models are built for, as well as data about which models are supported by different hardware. (Some LLMs specialize in coding tasks, while others are built for natural text chat.)

These lists will all end up sending you to Hugging Face, which has become a repository of LLMs and resources. If you came here from Reddit, you were probably directed straight to a model card, which is a dedicated information page about a specific downloadable model. These cards provide general information (like the datasets and training techniques that were used), a list of files to download, and a community page where people can leave feedback as well as request help and bug fixes.

At the top of each model card is a big, bold model name. In our case, we used the WizardLM 7B Uncensored model made by Eric Hartford. He uses the screen name ehartford, so the model’s listed location is “ehartford/WizardLM-7B-Uncensored,” exactly how it’s listed at the top of the model card.

Next to the title is a little copy icon. Click it, and it will save the properly formatted model name to your clipboard.
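That "user/model" shape matters: the download field expects exactly that two-part identifier. A small illustrative check (this helper is ours, not part of WebUI or Hugging Face):

```python
# A model identifier copied from a Hugging Face model card has the form
# "user/model". This toy validator just confirms the two-part shape;
# it is an illustration, not part of any tool.
def is_valid_repo_id(repo_id: str) -> bool:
    parts = repo_id.split("/")
    return len(parts) == 2 and all(p.strip() for p in parts)

print(is_valid_repo_id("ehartford/WizardLM-7B-Uncensored"))  # → True
print(is_valid_repo_id("WizardLM-7B-Uncensored"))            # → False
```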

Hugging Face LLM model card

(Credit: Brian Westover/Hugging Face)

Back in WebUI, go to the model tab and enter that model name into the field labeled “Download custom model or LoRA.” Paste in the model name, hit Download, and the software will start downloading the necessary files from Hugging Face.

Text Generation WebUI model download

(Credit: Brian Westover/Oobabooga)

If successful, you’ll see an orange progress bar pop up in the WebUI window and several progress bars will appear in the command window you left open in the background.

Text Generation WebUI model download progress

(Credit: Brian Westover/Oobabooga)

CMD window model download progress

(Credit: Brian Westover/Oobabooga)

Once it’s finished (again, be patient), the WebUI progress bar will disappear and it will simply say “Done!” instead.


Load Your Model and Settings in WebUI

Once you’ve got a model downloaded, you need to load it up in WebUI. To do this, select it from the drop-down menu at the upper left of the model tab. (If you have multiple models downloaded, this is where you choose one to use.)

Before you can use the model, you need to allocate some system or graphics memory (or both) to running it. While you can tweak and fine-tune nearly anything you want in these models, including memory allocation, I’ve found that setting it at roughly two-thirds of both GPU and CPU memory works best. That leaves enough unused memory for your other PC functions while still giving the LLM enough memory to track and hold a longer conversation.
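In concrete terms, the two-thirds rule works out like this (12GB of VRAM and 32GB of RAM are our test machine's figures; substitute your own):

```python
# The two-thirds rule of thumb in numbers. 12GB VRAM / 32GB RAM are the
# specs of this guide's test laptop; plug in your own totals.
def suggested_gb(total_gb: int, fraction: float = 2 / 3) -> int:
    return int(total_gb * fraction)

print(suggested_gb(12))  # GPU VRAM → 8
print(suggested_gb(32))  # system RAM → 21
```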

Text Generation WebUI model settings

(Credit: Brian Westover/Oobabooga)

Once you’ve allocated memory, hit the Save Settings button to save your choice, and it will default to that memory allocation every time. If you ever want to change it, you can simply reset it and press Save Settings again.

Enjoy Your LLM!

With your model loaded up and ready to go, it’s time to start chatting with your ChatGPT alternative. Navigate within WebUI to the Text Generation tab. Here you’ll see the actual text interface for chatting with the AI. Enter text into the box, hit Enter to send it, and wait for the bot to respond.

Text Generation WebUI with model running

(Credit: Brian Westover/Oobabooga)

Here, we’ll say again, is where you’ll experience a little disappointment: Unless you’re using a super-duper workstation with multiple high-end GPUs and massive amounts of memory, your local LLM won’t be anywhere near as quick as ChatGPT or Google Bard. The bot will spit out fragments of words (called tokens) one at a time, with a noticeable delay between each.
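If "tokens" is a new idea, the toy splitter below gives the flavor: models emit short fragments rather than whole words. Real tokenizers are learned from data; this fixed-width split is only an illustration.

```python
# A toy illustration of tokens: LLMs generate short fragments, not whole
# words. Real tokenizers are learned from data; this naive fixed-width
# split only demonstrates the idea.
def toy_tokens(text: str, width: int = 4) -> list[str]:
    return [text[i:i + width] for i in range(0, len(text), width)]

print(toy_tokens("Generation"))  # → ['Gene', 'rati', 'on']
```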

However, with a little patience, you can have full conversations with the model you’ve downloaded. You can ask it for information, play chat-based games, even give it one or more personalities. Plus, you can use the LLM with the assurance that your conversations and data are private, which gives peace of mind.

You’ll encounter a ton of content and concepts to explore while starting with local LLMs. As you use WebUI and different models more, you’ll learn more about how they work. If you don’t know your text from your tokens, or your GPTQ from a LoRA, these are ideal places to start immersing yourself in the world of machine learning.


Technology

Steps to delete your personal information from the dark web


Have you ever wondered what to do if your identity is stolen and sold on the dark web? Many people face this scary situation every day, and they don’t know how to deal with it. That includes Mary of Upper Chichester, Pennsylvania.

“I have a major issue with my Motorola Android cell phone. Motorola’s tech department wasn’t helpful at all.

“I was alerted by Capital One that my identity was being sold on the dark web. I did contact all of the credit reporting agencies to notify them and place alerts on my credit report. That’s about all I have done so far. My issue is how do I remove my personal information from the dark web and is my phone now useless?

“Do I need to get a new phone or is there any easy way to secure my current phone?

Woman smiles at her Android (Cyberguy.com)

“I’m worried about someone using my personal information to commit criminal acts using my identity. Please tell me the easiest way to rectify this scary situation. What should I do next?”

Mary, Upper Chichester, PA

Mary, I’m sorry to hear that your identity was being sold on the dark web. I’m glad you contacted the credit reporting agencies to alert them and place alerts on your credit report. That’s one of several smart moves to protect your credit from fraud. As for removing your personal information from the dark web, fortunately, there are several ways to approach this, which we’ll get into below.

What do I do if my data has been stolen?

Log out of all of your accounts: If you see that your information was part of any sort of breach, you should first log out of all your accounts on every web browser on your computer. Once you’ve done that, you should clear your cookies and cache.

Change your password: If your password was compromised, be sure to change it immediately. Consider using a password manager to generate and store complex passwords.
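For the technically inclined, "complex" here means long and genuinely random, not a word with a number tacked on. This is essentially what a password manager does under the hood. A minimal sketch in Python (the function name and length are illustrative, not from any particular product) uses the cryptographically secure `secrets` module rather than `random`:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation
    using the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 16-character password drawn from this roughly 94-symbol alphabet is far beyond practical brute-force range, which is why a manager-generated password beats anything you would memorize.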

MY TIPS AND BEST EXPERT-REVIEWED PASSWORD MANAGERS OF 2023 CAN BE FOUND HERE

Remove yourself from the internet: While no service promises to remove all your data from the internet, a removal service is worthwhile if you want to continuously monitor and automate the process of removing your information from hundreds of sites. Doing so significantly decreases the chances of a scammer getting your information and using it to target you.

Hacker looking at a computer. (Cyberguy.com)


SEE MY TIPS AND BEST PICKS FOR REMOVING YOUR PERSONAL INFORMATION FROM THE INTERNET

Invest in antivirus protection: The best way to protect yourself from accidentally clicking a malicious link that could give hackers access to your personal information is to have antivirus protection installed and actively running on all your devices.

See my expert review of the best antivirus protection for your Windows, Mac, Android & iOS devices.

Do you need a new phone if your personal info is on the dark web?

As for your Android phone, you should be sure to do a malware scan and implement necessary security measures to prevent hackers from accessing it again. Here are some steps you can take to secure your Android phone from hackers:


Do a malware scan of your Android device: Scan your phone with reputable antivirus protection, and remove any suspicious apps or files.

  • Phishing and malware are common tactics that hackers use to trick you into clicking on malicious links or attachments that can infect your Android phone with spyware or ransomware.
  • You should be careful about opening emails, texts, or messages from unknown senders or sources that look suspicious or too good to be true.
  • Avoid downloading apps from unofficial sources or websites that may contain malware.

Update your software: Make sure you have the latest version of Android and any apps you use on your phone. Software updates often fix security vulnerabilities that hackers can exploit. You can check for updates in your phone’s settings or in the Google Play Store. Learn how to update your Android or iPhone.

Use a strong password or PIN: Lock your phone with a password or PIN that is hard to guess or crack. You can also use biometric authentication, such as fingerprint or face recognition, if your phone supports it. You should also change your passwords and log out of any accounts that may have been compromised.

Enable two-factor authentication: Two-factor authentication (2FA) adds an extra layer of security to your online accounts by requiring you to enter a code or use an app to verify your identity when you log in. You can enable 2FA on services that offer it, such as Google, Facebook, Twitter, etc. You should also use a different device to receive the codes or use an authentication app like the ones described here.

Password protection service (Cyberguy.com)


Kurt’s key takeaways


Mary’s story sheds light on the reality many people face when grappling with the nightmare of identity theft and the dark web. Quick action is key, such as notifying the credit reporting agencies as soon as you discover your information has been stolen or is being used.

Remember, once your personal info is on the dark web, it isn’t easily erased, but the steps above will get you started on removing it. As for your phone, securing it with updates, antivirus software, strong passwords, and cautious behavior will help thwart potential hackers.

Safeguarding your identity is a constant battle. However, it’s just a reality of where we are today. So, staying proactive is your best armor.

What frustrates you most about having to always be on guard when it comes to your tech and security? Do you wish our government did more to find those responsible for perpetuating the dark web and its crimes? Let us know by writing us at Cyberguy.com/Contact.

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.


Copyright 2023 CyberGuy.com. All rights reserved.


Trending