Technology

183 million email passwords leaked: Check yours now


A massive online leak has exposed more than 183 million stolen email passwords gathered from years of malware infections, phishing campaigns and older data breaches. Cybersecurity experts say it is one of the largest compilations of stolen credentials ever discovered.

Security researcher Troy Hunt, who runs the website Have I Been Pwned, found the 3.5-terabyte dataset online. The credentials came from infostealer malware, which secretly collects usernames, passwords and website logins from infected devices, and from credential-stuffing lists.

Researchers say the data contains both old and newly discovered credentials. Hunt confirmed that 91% of the data had appeared in previous breaches, but about 16.4 million email addresses were completely new to any known dataset.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.



Cyber experts uncovered a 3.5-terabyte data dump containing millions of stolen logins. (Kurt “CyberGuy” Knutsson)

The real risk behind the password leak

The leak puts millions of users at risk. Hackers often collect stolen logins from multiple sources and combine them into large databases that circulate on dark web forums, Telegram channels and Discord servers.

If you have reused passwords across multiple sites, attackers can use this data to break into your accounts through credential stuffing. This method tests stolen username and password pairs on many different platforms.

The risk remains real for anyone using old or repeated credentials. One compromised password can unlock social media, banking and cloud accounts.


Researcher Troy Hunt traced the leak to malware that secretly steals passwords from infected devices. (Jens Büttner/picture alliance via Getty Images)

Google responds to the reports

Google confirmed there was no Gmail data breach. In a post on X, the company stated, "Reports of a Gmail security breach impacting millions of users are false. Gmail's defenses are strong, and users remain protected."

Google clarified that the leak came from infostealer databases that compile years of stolen credentials from across the web. These databases are often mistaken for new breaches when, in fact, they represent ongoing theft activity. Troy Hunt also confirmed the dataset originated from Synthient’s collection of infostealer logs, not from a single platform or recent attack. While no new breach occurred, experts warn that leaked credentials remain dangerous because cybercriminals reuse them for future attacks.

How to check if you were exposed

To see if your email was affected, visit Have I Been Pwned, the official source for this newly added dataset. Enter your email address to find out whether your information appears in the Synthient leak.
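For passwords specifically, Have I Been Pwned's companion Pwned Passwords API supports a k-anonymity check: you hash the password with SHA-1 on your own machine, send only the first five hex characters of the digest to the API, and compare the returned hash suffixes locally, so the full password never leaves your device. A minimal Python sketch (the local hashing step is exact; the network call is illustrative):

```python
import hashlib
import urllib.request

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    (the only part sent to the API) and the 35-char suffix
    (compared locally against the API's response)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def password_pwned_count(password: str) -> int:
    """Return how many times the password appears in Pwned Passwords
    (0 means it was not found in any known breach)."""
    prefix, suffix = hibp_range_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

Because only a five-character hash prefix is transmitted, the service cannot tell which password from that range you were actually checking.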


Many password managers also include built-in breach scanners that use the same data sources. However, they may not yet include this new collection until their databases update.

If your address shows up, treat it as compromised. Change your passwords immediately and turn on stronger security features to protect your accounts.


The 183 million exposed credentials came from malware, phishing and old data breaches. (Kurt “CyberGuy” Knutsson)

9 steps to protect yourself now

Protecting your online life starts with consistent action. Each step below adds another layer of defense against hackers, malware and credential theft.


1) Change your passwords immediately

Start with your most important accounts, such as email and banking. Use strong, unique passwords with letters, numbers and symbols. Avoid predictable choices like names or birthdays. 

Never reuse passwords. One stolen password can unlock multiple accounts. Each login should be unique to protect your data.

A password manager makes this simple. It stores complex passwords securely and helps you create new ones. Many managers also scan for breaches to see if your current passwords have been exposed.
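To illustrate what "strong and unique" means in practice, here is a short Python sketch using the standard library's cryptographically secure `secrets` module. The 16-character default and the character-class rules are my own example parameters, not a specific recommendation from this article:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing lowercase, uppercase,
    digits and symbols, using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Regenerate until every character class is represented.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

A password manager does the same job automatically and remembers the result for you, which is why generating and reusing one "memorable" password by hand is the pattern to avoid.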

Next, check whether your email has been caught in a recent credential leak. Our No. 1 password manager pick includes a built-in Breach Scanner that searches trusted databases, including the newly added Synthient data from Have I Been Pwned. It helps you find out if your email or passwords have appeared in any known leaks. If you see a match, change any reused passwords right away and secure those accounts with strong, unique credentials.

Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.


2) Enable two-factor authentication (2FA)

Turn on 2FA wherever possible. It adds a powerful second layer of defense that blocks intruders even if they have your password. You will receive a code by text, app or security key. That code ensures only you can log in to your accounts.

3) Use an identity theft service for continuous monitoring

Identity theft protection companies can monitor personal information like your Social Security number (SSN), phone number and email address, and alert you if it is being sold on the dark web or used to open an account. They can also help you freeze your bank and credit card accounts to prevent further unauthorized use by criminals. It’s a smart way to stay one step ahead of hackers.

See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.

4) Protect your devices with strong antivirus software 

Infostealer malware hides inside fake downloads and phishing attachments. Strong antivirus software scans your devices to stop threats before they spread. Keep your antivirus updated and run frequent scans. Even one unprotected device can put your whole digital life at risk.

The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.


Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

5) Avoid saving logins in your web browser

Browsers are convenient but risky. Infostealer malware often targets passwords saved in the browser, so store your logins in a dedicated password manager instead.

6) Keep software updated

Updates fix security flaws that hackers exploit. Turn on automatic updates for your operating system, antivirus and apps. Staying current keeps threats out. 

7) Download only from trusted sources

Avoid unknown websites that offer free downloads. Fake apps and files often contain hidden malware. Use official app stores or verified company websites. 

8) Review your account activity often

Check your accounts regularly for unusual logins or device connections. Many platforms show a login history. If something looks off, change your password and enable 2FA immediately.


9) Consider a personal data removal service

The massive leak of 183 million credentials shows just how far your personal information can spread and how easily it can resurface years later in aggregated hacker databases. Even if your passwords were part of an old breach, data like your name, email, phone number or address may still be available through data broker sites. Personal data removal services can help reduce your exposure by scrubbing this information from hundreds of these sites.

While no service can guarantee total removal, they drastically reduce your digital footprint, making it harder for scammers to cross-reference leaked credentials with public data to impersonate or target you. These services monitor and automatically remove your personal info over time, which gives me peace of mind in today’s threat landscape.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.



Kurt’s key takeaways

This leak highlights the ongoing danger of malware and password reuse. Prevention remains the best defense. Use unique passwords, enable 2FA and stay alert to keep your data safe. Visit Have I Been Pwned today to check your email and take action. The faster you respond, the better you protect your identity.

Have you ever discovered your data in a breach? What did you do next? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com.  All rights reserved.


Technology

You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd


Billy Woods has one of the highest batting averages in the game. Between his solo records like Hiding Places and Maps, and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And, while no one would ever claim that Woods’ albums were light-hearted fare (these are not party records), Golliwog represents his darkest to date.

This is not your typical horrorcore record. Others, like Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.

Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:

“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”

Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than it does with some of the record’s other tracks, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.

That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza on “Corinthians”:


If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips

The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.


Technology

Grok AI scandal sparks global alarm over child safety


Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That admission alone is alarming. What followed revealed a far broader pattern.


The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children.  (Silas Stein/picture alliance via Getty Images)

Grok quietly restricts image tools to paying users after backlash

As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.

The apology that raised more questions

Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.


After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”


Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)

Sexualized images of minors are illegal

This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.


In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

The scale of the problem is growing fast

A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

Real people are being targeted

The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

Governments respond worldwide

The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.


Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)

Concerns grow over Grok’s safety and government use

The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question. Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

What parents and users should know

If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.


Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com       


Kurt’s key takeaways

The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.



Technology

Google pulls AI overviews for some medical searches


In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.
