Technology
Scammers target retirees with election tricks and fake polling updates ahead of Nov 4 vote
Election season should be about casting your vote and making your voice heard. But for scammers, it’s an opportunity to trick retirees into handing over personal details, money or even their vote itself.
What many don’t realize is that public voter registration data is one of the biggest tools fraudsters use. With elections coming up on Nov. 4, scammers are already scraping these records and using them to create targeted scams. If you’re a retiree or helping a parent or loved one prepare to vote, here’s how to stay safe.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Why voter records are public and risky
Every state in the U.S. keeps voter registration lists. These include personal details like:
- Full name
- Home address
- Phone number (in some states)
- Political party affiliation
- Voting history (whether you voted, not who you voted for)
Scammers are targeting retirees with fake election messages and calls. (Getty Images)
While these lists are meant for transparency, they’re often made available online or sold in bulk. Data brokers scoop them up, combine them with other records and suddenly scammers have a detailed profile of you: your age, address and voting habits. For retirees, this exposure is especially dangerous. Why? Because seniors are less likely to know that this information is floating around, making scams seem more convincing.
You can easily check where your personal information is exposed with a free data exposure scanner.
Get a free scan to find out if your personal information is already out on the web: Cyberguy.com
Scams targeting retirees before Nov. 4
Here are the most common election-season cons fraudsters are already running:
1) Fake “polling place” updates
You might get a call, text or email saying your polling location has changed. Scammers may then direct you to a fake site that asks for your Social Security number or ID details “to confirm eligibility.”
2) “Voter ID update” messages
Since some states require voter ID, scammers will pose as election officials, claiming your ID is “out of date” or that you must upload personal documents. These go straight into the wrong hands.
3) Donation scams
Criminals set up fake political donation sites with names resembling real campaigns. Retirees who are politically active or generous with causes are prime targets here.
4) Absentee ballot phishing
Scammers know many seniors vote by mail. They’ll send emails offering to “help” with requests or track your ballot while stealing your personal data in the process.
Red flags to watch out for
Public voter data can make it easy for fraudsters to create convincing scams. (CyberGuy.com)
Scammers use clever tricks to make their messages seem urgent and official. Here are the warning signs that should make you pause before responding.
- Urgency: “Act now or lose your right to vote.” Scammers use deadlines to scare you.
- Unusual payment requests: No legitimate election office will ever ask for payment to vote or register.
- Strange links: If you’re asked to click on a link from a text or email, stop. Always go directly to your state’s official election website instead.
- Requests for sensitive info: Election officials don’t need your Social Security number or bank account details.
How retirees can stay safe this election season
Protecting yourself doesn’t mean opting out of civic life. It means taking a few smart steps:
1) Reduce your data footprint
This one matters most. The less personal data available about you, the fewer opportunities scammers have to trick you during election season. When they can view your age, address and even your voting history, they can craft messages that sound alarmingly real. The good news is you can take control and limit what’s out there.
Reaching every voter data broker or people-search site on your own is nearly impossible, and most make the process intentionally difficult. That’s why data removal services can help. They automatically send removal requests to hundreds of data-broker sites and keep monitoring to ensure your information doesn’t return. The result is fewer scam calls, fewer phishing emails and far less risk this election season.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you, actively monitoring and systematically erasing your personal information from hundreds of websites. That gives me peace of mind and has proven to be the most effective way to scrub your personal data from the web. By limiting what’s available, you make it harder for scammers to cross-reference data from breaches with information found on the dark web and use it to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com
2) Confirm only through official sources
If you get a message about your polling place, ignore any links and call your local election office directly. Each state also has an official website you can trust.
3) Sign up for ballot tracking
Many states offer secure ballot tracking online. Use only the official election site, not third-party services.
4) Freeze your credit
Since scammers use voter data to impersonate you, a credit freeze stops them from opening new accounts in your name. Retirees who don’t need frequent new credit are especially good candidates for this protection.
Taking steps to remove your personal info online helps keep your vote and data safe. (Kurt “CyberGuy” Knutsson)
5) Be wary of political donation sites
If you want to donate, type the campaign’s official website into your browser instead of clicking a link in an email or social media ad.
Kurt’s key takeaway
Voting is one of the most important rights we have. But this year, scammers will use public voter data to exploit retirees like never before. Don’t let them steal your peace of mind. By spotting the red flags, sticking to official election sources and removing your personal data from the web, you can protect yourself and your vote.
Have you or someone you know received a suspicious message about voting or donations? How did you realize or suspect that it was a scam? Let us know by writing to us at Cyberguy.com
Copyright 2025 CyberGuy.com. All rights reserved.
You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd
Billy Woods has one of the highest batting averages in the game. Between his solo records like Hiding Places and Maps, and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And, while no one would ever claim that Woods’ albums were light-hearted fare (these are not party records), Golliwog represents his darkest to date.
This is not your typical horrorcore record. Others, like Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.
Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:
“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”
Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than with some of the other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.
That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:
If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips
The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.
Grok AI scandal sparks global alarm over child safety
Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.
In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
That admission alone is alarming. What followed revealed a far broader pattern.
The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children. (Silas Stein/picture alliance via Getty Images)
Grok quietly restricts image tools to paying users after backlash
As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.
The apology that raised more questions
Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.
Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.
After reviewing Grok’s publicly accessible photo feed, Copyleaks identified roughly one nonconsensual sexualized image per minute, a conservative estimate based only on images of real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale, AI-enabled harassment.
Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”
Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)
Sexualized images of minors are illegal
This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.
In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.
The scale of the problem is growing fast
A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.
Real people are being targeted
The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher of the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.
Governments respond worldwide
The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.
Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)
Concerns grow over Grok’s safety and government use
The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.
Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.
Over the past year, critics have accused Grok of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?
What parents and users should know
If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.
Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.
Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.
Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain one of the most effective ways to protect children online.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.
Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.
Google pulls AI overviews for some medical searches
In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.
In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.