Technology
Hyundai AutoEver America breached: Know the risks to you
Hyundai AutoEver America discovered on March 1, 2025, that hackers had compromised its systems. Investigators found the intrusion began on February 22 and continued until March 2.
Hyundai AutoEver America (HAEA) provides IT services for Hyundai Motor America, including systems that support employee operations and certain connected-vehicle technologies. While the company works across Hyundai’s broader ecosystem, this incident did not involve customer or driver data.
According to the statement provided to CyberGuy, the breach was limited to employment-related information tied to Hyundai AutoEver America and Hyundai Motor America. The company confirmed that about 2,000 current and former employees were notified of the incident in late October. HAEA said it immediately alerted law enforcement and hired outside cybersecurity experts to assess the damage.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Cybercriminals targeted Hyundai AutoEver America’s systems, exposing sensitive data. (Kurt “CyberGuy” Knutsson)
Why this Hyundai AutoEver America breach matters
The exposed data reportedly includes names, Social Security numbers and driver’s license numbers, making this breach far more serious than one involving passwords alone. Experts warn that these details can be used for long-term identity theft and financial fraud. Because Social Security numbers cannot easily be changed, criminals have more time to create fake identities, open fraudulent accounts and launch targeted phishing attacks long after the initial breach.
Experts warn that stolen Social Security and driver’s license information could be used for identity theft and fraud. (Kurt “CyberGuy” Knutsson)
Who was affected in the Hyundai AutoEver America data incident
HAEA manages select IT systems tied to Hyundai Motor America’s employee operations, along with broader technology functions for Hyundai and Genesis across North America. Its role includes supporting connected-vehicle infrastructure and dealership systems.
According to the company, this incident was limited to employment-related data and primarily affected approximately 2,000 current and former employees of Hyundai AutoEver America and Hyundai Motor America. No customer information or Bluelink driver details were exposed. While some filings reference sensitive data types such as Social Security numbers or driver’s license information, the incident did not involve Hyundai customers or the millions of connected vehicles HAEA supports.
Earlier reports suggested that 2.7 million individuals were affected, but Hyundai says that figure is unrelated to the breach. Instead, 2.7 million is the estimated number of connected vehicles that Hyundai AutoEver America helps support across North America. None of that consumer or vehicle data was accessed.
Hyundai also clarified that the United States has about 850 Hyundai dealerships and emphasized that the scope of this incident was narrow and contained.
We reached out to HAEA for a comment, and a representative for the company provided CyberGuy with this statement:
“Hyundai AutoEver America, an IT vendor that manages certain Hyundai Motor America employee data systems, experienced an incident to that area of business that impacted employment-related data and primarily affected current and former employees of Hyundai AutoEver America and Hyundai Motor America. Approximately 2,000 primarily current and former employees were notified of the incident. The 2.7 million figure that is cited in many media articles has no relation to the actual security incident. The 2.7 million figure represents the alleged total number of connected vehicles that may be supported by Hyundai AutoEver America across North America. No Hyundai consumer data was exposed, and no Hyundai Motor America customer information or Bluelink driver data was compromised.”
Scammers may now pose as company representatives, contacting people to steal more personal details. (Kurt “CyberGuy” Knutsson)
What you should do right now
- Monitor your bank, credit card and vehicle-related accounts for suspicious activity.
- Check for a notification letter from Hyundai AutoEver America or your car brand.
- Enroll in the two years of complimentary credit monitoring offered by HAEA if you qualify.
- Enable multi-factor authentication (MFA) on all important accounts, including those tied to your vehicle.
- Be cautious of emails, texts or calls claiming to be from Hyundai, Kia or Genesis. Always verify through official websites.
Smart ways to stay safe after the Hyundai AutoEver America breach
Whether you were directly affected or just want to stay alert, this breach is a reminder of how important it is to protect your personal information. Follow these practical steps to keep your data secure and reduce the risk of identity theft or scams.
1) Freeze or alert your credit
Contact major credit bureaus — Experian, TransUnion and Equifax — to set a fraud alert or freeze. This helps block new accounts from being opened in your name.
2) Protect your vehicle apps
If you use apps tied to your vehicle, update passwords and enable multi-factor authentication. Avoid saving login details in unsecured places. Also, consider using a password manager, which securely stores and generates complex passwords, reducing the risk of password reuse.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com
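Breach scanners like the one described above generally use a k-anonymity scheme so your full credential never leaves your device. As a hedged illustration (not any particular product’s actual implementation), the sketch below shows the client side of the technique behind the public Pwned Passwords range API at api.pwnedpasswords.com: hash the password with SHA-1, send only the first five hex characters of the hash, then compare the returned suffixes locally. The helper names are hypothetical; only the comparison logic is the point.

```python
import hashlib


def hash_prefix_and_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix would ever be sent over the network;
    the remaining 35-character suffix stays local.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def is_compromised(password: str, range_response: str) -> bool:
    """Check the local suffix against a range-API response body.

    `range_response` is the plain-text body a server would return for
    GET https://api.pwnedpasswords.com/range/<prefix>, with one
    "<suffix>:<count>" pair per line.
    """
    _, suffix = hash_prefix_and_suffix(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

A real scanner fetches the range response with an HTTP client; the local comparison is the part worth understanding, because it is what keeps your full password hash off the wire.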
3) Watch for fake support messages
Scammers may use news of the Hyundai AutoEver America breach as a way to contact Hyundai, Kia or Genesis owners, pretending to be from customer support or the dealership. They might claim to help verify your account, update your information or fix a security issue. Do not share personal details or click any links. Type the brand’s web address directly into your browser instead of clicking links in messages or emails. Always confirm through the official brand website or by calling the verified customer service number.
4) Use strong antivirus protection
Using strong antivirus software helps block phishing links, malware downloads and fake websites that might appear after a data breach. It can also scan your devices for hidden threats that may try to steal login data or personal files.
The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.
5) Use a data removal service
Data removal tools automatically find and delete your personal information from people-search and data-broker sites. These services reduce the chances that criminals will use leaked data to target you with phishing or social-engineering scams.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
6) Monitor your digital footprint
Consider using identity monitoring services to track your personal information and detect possible misuse early.
Identity theft protection companies can monitor personal information like your Social Security number (SSN), phone number and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.
See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.
7) Keep your devices updated
Regularly install security updates on your phone, laptop and smart car systems to reduce the risk of further attacks.
8) Report suspicious activity the right way
If you notice unusual account activity, fraudulent charges, or suspicious messages that appear tied to this breach, report it immediately. Start by contacting your bank or credit card provider to freeze or dispute any unauthorized transactions. Then, file a report with the Federal Trade Commission (FTC) at IdentityTheft.gov, where you can create an official recovery plan. If you suspect a scam message or call, forward phishing emails to reportphishing@apwg.org and report fake texts to 7726 (SPAM).
Kurt’s key takeaways
This incident highlights how much personal data is connected to modern cars and how vulnerable those systems can be. When your vehicle is linked to your identity, protecting your data becomes just as important as maintaining the car itself. Stay alert, use the tools available to safeguard your accounts and report any suspicious activity right away.
Should companies like Hyundai AutoEver be doing more to keep customer data secure? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd
Billy Woods has one of the highest batting averages in the game. Between his solo records like Hiding Places and Maps, and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And, while no one would ever claim that Woods’ albums were light-hearted fare (these are not party records), Golliwog represents his darkest to date.
This is not your typical horrorcore record. Other acts in the genre, like Geto Boys, Gravediggaz and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.
Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:
“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”
Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds Are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than with some of the other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.
That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:
If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips
The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.
Grok AI scandal sparks global alarm over child safety
Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.
In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
That admission alone is alarming. What followed revealed a far broader pattern.
The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children. (Silas Stein/picture alliance via Getty Images)
Grok quietly restricts image tools to paying users after backlash
As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.
The apology that raised more questions
Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.
Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.
After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.
Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”
Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)
Sexualized images of minors are illegal
This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.
In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.
The scale of the problem is growing fast
A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.
Real people are being targeted
The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.
Governments respond worldwide
The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.
Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)
Concerns grow over Grok’s safety and government use
The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.
Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.
Over the past year, critics have accused Grok of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It has also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: can a powerful AI tool be deployed responsibly without strong oversight and enforcement?
What parents and users should know
If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.
Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.
Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.
Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com
Kurt’s key takeaways
The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.
Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Google pulls AI overviews for some medical searches
In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.
In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.