Technology

How hackers can send text messages from your phone without you knowing

There is enough to worry about in life without the additional stress and terror of finding out your friends, family or complete strangers have been receiving text messages from “you” without your knowledge. How did they do that? How did they send a text message from your phone without you knowing?

This is a real threat that many people face every day. That’s why we felt it was so important to answer this question sent in from John.

“I just found a text written to me, which was a response to a text I sent. Problem is, I didn’t send the text? I’m 65 years old, and not as spry as I once was, but I do not remember sending the text. My wife is trying to convince me I’m going crazy. She says it’s impossible for someone to send a text (impersonating me) without having possession of my phone. Is that true? Can someone hack your phone and send text??” – John, Fort Myers, FL

What is SMS spoofing?

We’re sorry to hear that you’re going through this, John. It is possible for someone to send a text message impersonating you without having possession of your phone. This is known as SMS spoofing, and it is a technique used by cybercriminals to send fraudulent text messages. 

How does SMS spoofing work?

SMS spoofing works by manipulating the sender ID of a text message to make it appear as if it was sent from a different phone number. This can be done using various online services that allow users to send text messages with a fake sender ID. Cybercriminals will change the sender ID to impersonate friends, family, or a legitimate company.

Example of text message screenshot of hacker pretending to be a bank (Kurt “CyberGuy” Knutsson)

It is important to note that SMS spoofing is illegal and can be used for malicious purposes such as phishing scams, identity theft, and fraud. Scammers bank on the combination of familiarity and urgency to get you to interact with their text either by clicking on a link, downloading a file, or responding with personal information.

How to spot and avoid SMS spoofing scams

Here are the top three reasons scammers send text messages under a fake sender ID with an urgent request:

1. To trick you into clicking a malicious link that leads to a fraudulent website designed to steal your personal or financial information, or to unleash malware or viruses onto your phone.

Screenshot of text SMS spoof trying to trick you to click a malicious link (Kurt “CyberGuy” Knutsson)

2. To lure you into paying a fake bill under the guise of a reputable or familiar company.

Screenshot of text spoof trying to trick you to pay fake bill (Kurt “CyberGuy” Knutsson)

3. To damage your reputation or relationships with friends, family, and others by sending harmful messages.

Screenshot of text spoof trying to damage your reputation (Kurt “CyberGuy” Knutsson)
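
All three tactics lean on the same mechanics: urgent language paired with a link or request you were not expecting. To make that pattern concrete, here is a minimal, illustrative Python sketch of a red-flag checker. The urgency keywords and link-shortener domains are our own assumptions, and a real carrier-grade filter is far more sophisticated; the point is simply how mechanical the scammers’ formula is.

```python
import re

# Illustrative heuristics only: these keywords and shortener domains are
# assumptions, and real spam filters are far more sophisticated.
URGENT_WORDS = {"urgent", "immediately", "suspended", "verify", "act now", "final notice"}
URL_PATTERN = re.compile(r"https?://(\S+)")
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def looks_suspicious(message: str) -> bool:
    """Flag texts that pair urgent language with a link."""
    text = message.lower()
    has_urgency = any(word in text for word in URGENT_WORDS)
    urls = URL_PATTERN.findall(text)  # domain and path after the scheme
    hides_destination = any(url.split("/")[0] in SHORTENERS for url in urls)
    # Urgency plus any link is the classic spoofed-text pattern; a shortened
    # link that hides its real destination is suspicious on its own.
    return (has_urgency and bool(urls)) or hides_destination

print(looks_suspicious("URGENT: your account is suspended, verify at https://bit.ly/3xYz"))  # True
print(looks_suspicious("Running late, see you at 7!"))  # False
```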

iMessage Vulnerabilities

SMS spoof on Apple device from hacker posing as financial institution (Kurt “CyberGuy” Knutsson)

In the past, many Apple devices were considered virtually immune to viruses and malware. Unfortunately, bugs in iOS mean hackers can take over an Apple device just like any other device on the market. Apple patches these vulnerabilities on a consistent basis, but new flaws keep surfacing, which leaves iPhone users vulnerable to SMS spoofing, too.

A hacker can use “interaction-less” bugs, sending a specially crafted SMS message that causes the iMessage server to send user-specific data, including images or SMS messages, back to the attacker. The user doesn’t even have to open the message to trigger the bug. Additionally, hackers can send malicious code through texts, embedding it on the user’s phone. These vulnerabilities are unique to Apple devices.

Outside of these specific vulnerabilities, hackers generally need the user to interact with the text message before malicious code is unleashed onto the device.

7 Actions to take if you suspect SMS spoofing

If you suspect that your phone has been hacked or that someone is impersonating you, it is important to take immediate action. Here are some steps you can take:

1) Have good antivirus software on your phone: Good antivirus software actively running on your devices will alert you to any malware in your system and warn you against clicking malicious links that may install malware and allow hackers to gain access to your personal information. Find my review of Best Antivirus Protection here.

2) Keep your phone software updated: Both iPhone and Android users should keep their phone’s OS and apps updated regularly, as Apple and Google release patches for vulnerabilities as they are discovered. Updating your phone can prevent hackers from exploiting security flaws and sending text messages from your phone without you knowing.

3) Change your passwords: Change the passwords for all your online accounts, including your email, social media, and banking accounts. Do not use easy-to-guess information such as your birthday or address. Use strong, unique passwords, preferably ones that are alphanumeric and include special symbols. Be sure to do this on another device in case there is malware on your phone monitoring you. Consider using a password manager to generate and store complex, unique passwords that are difficult to crack.
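
To make “strong and unique” concrete, here is a minimal Python sketch of the kind of generator a password manager uses under the hood. The symbol set and length are our own assumptions; the key point is that it draws from the operating system’s cryptographic randomness rather than anything guessable.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random alphanumeric password that includes special symbols."""
    # `secrets` uses the OS's cryptographically secure randomness,
    # unlike the predictable `random` module.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call yields an independent, hard-to-guess password,
# so every account can get its own.
print(generate_password())
```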

4) Enable two-factor authentication: Enabling two-factor authentication on all your online accounts will add an extra layer of security to your accounts and make it more difficult for hackers to gain access.
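
For readers curious how the common authenticator-app flavor of two-factor authentication works, the sketch below uses the third-party pyotp library to demonstrate a time-based one-time password (TOTP). The setup shown is hypothetical, but it illustrates why a stolen password alone is not enough: an attacker would also need the shared secret stored on your device.

```python
# pip install pyotp
import pyotp

# At setup, the service and your authenticator app share this secret once,
# usually by scanning a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app derives a short-lived code from the secret and the current time;
# the server runs the same computation to verify it.
code = totp.now()
print("Current code:", code)
print("Valid?", totp.verify(code))  # True within the ~30-second window
```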

5) Contact your mobile carrier: Contact your mobile carrier and report the incident. They may be able to help you identify the source of the text message and take appropriate action.

6) File a police report: If you believe that you have been a victim of identity theft or fraud, file a police report with your local law enforcement agency.

Fraud detection text message alert on iPhone (Kurt “CyberGuy” Knutsson)

7) Watch your connections: When possible, do not connect to unprotected or public Wi-Fi hotspots or Bluetooth connections, and turn Bluetooth off when not in use. On most iPhones, you can choose who can send you files or photos via AirDrop (a feature that uses Bluetooth) by selecting “no one,” people in your Contacts, or Everyone. We suggest you set it to “no one” and only turn it on when you are with the person you are exchanging a file or photo with.

I’ve been scammed by SMS spoofing. What to do next?

Below are some next steps if you find you or your loved one is a victim of identity theft from an SMS spoofing attack.

1) Change your passwords. If you suspect that your phone has been hacked or that someone is impersonating you, they could access your online accounts and steal your data or money. ON ANOTHER DEVICE (e.g., your laptop or desktop), change your passwords for all your important accounts, such as email, banking, and social media. You want to do this on another device so the hacker isn’t recording you setting up your new password on your hacked device. Use strong and unique passwords that are hard to guess or crack. You can also consider using a password manager to generate and store your passwords securely.

2) Look through bank statements and check account transactions to see when the unusual activity started.

3) Use a fraud protection service. Identity theft protection companies can monitor personal information like your Social Security number (SSN), phone number, and email address and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

Some of the best parts of using an identity theft protection service include identity theft insurance to cover losses and legal fees, and a white-glove fraud resolution team in which a U.S.-based case manager helps you recover any losses. See my tips and best picks on how to protect yourself from identity theft.

4) Report any breaches to official government agencies like the Federal Communications Commission.

5) You may wish to get the professional advice of a lawyer before speaking to law enforcement, especially if you are dealing with criminal identity theft, or if being a victim of criminal identity theft has left you unable to secure employment or housing.

6) Alert all three major credit bureaus and possibly place a fraud alert on your credit report.

7) Run your own background check or request a copy of one if that is how you discovered your information has been used by a criminal. 

8) Alert your contacts. If hackers have accessed your device through SMS spoofing, they could use it to send spam or phishing messages to your contacts. They could impersonate you and ask for money or personal information. You should alert your contacts and warn them not to open or respond to any messages from you that seem suspicious or unusual.

9) Restore your device to factory settings. If you want to make sure that your device is completely free of any malware or spyware, you can restore it to factory settings. This will erase all your data and settings and reinstall the original software. You should back up your important data BEFORE doing this, and only restore it from a trusted source.

If you are a victim of identity theft, the most important thing to do is to take immediate action to mitigate the damage and prevent further harm.

Kurt’s key takeaways

It’s possible for someone who doesn’t have physical possession of your phone to impersonate your number through SMS spoofing. Though you might not have control over who gets your number, there are steps you can take to protect yourself.

Have you ever received a convincing text spoof message? What were the telltale signs that it was a spoofed message? Let us know by writing us at Cyberguy.com/Contact.

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question or let us know what stories you’d like us to cover.

Copyright 2024 CyberGuy.com. All rights reserved.

Technology

You need to listen to Billy Woods’ horrorcore masterpiece for the A24 crowd

Billy Woods has one of the highest batting averages in the game. Between his solo records like Hiding Places and Maps, and his collaborative albums with Elucid as Armand Hammer, the man has multiple stone-cold classics under his belt. And, while no one would ever claim that Woods’ albums were light-hearted fare (these are not party records), Golliwog represents his darkest to date.

This is not your typical horrorcore record. Other acts, like Geto Boys, Gravediggaz, and Insane Clown Posse, reach for slasher aesthetics and shock tactics. But what Billy Woods has crafted is more A24 than Blumhouse.

Sure, the first track is called “Jumpscare,” and it opens with the sound of a film reel spinning up, followed by a creepy music box and the line: “Ragdoll playing dead. Rabid dog in the yard, car won’t start, it’s bees in your head.” It’s setting you up for the typical horror flick gimmickry. But by the end, it’s psychological torture. A cacophony of voices forms a bed for unidentifiable screeching noises, and Woods drops what feels like a mission statement:

“The English language is violence, I hotwired it. I got a hold of the master’s tools and got dialed in.”

Throughout the record, Woods turns to his producers to craft not cheap scares but tension, to make the listener feel uneasy. “Waterproof Mascara” turns a woman’s sobs into a rhythmic motif. On “Pitchforks & Halos,” Kenny Segal conjures the aural equivalent of a serial killer’s POV shot. And “All These Worlds are Yours,” produced by DJ Haram, has more in common with the early industrial of Throbbing Gristle than with other tracks on the record, like “Golgotha,” which pairs boom-bap drums with New Orleans funeral horns.

That dense, at times scattered production is paired with lines that juxtapose the real-world horrors of oppression and colonialism with scenes that feel taken straight from Bring Her Back: “Trapped a housefly in an upside-down pint glass and waited for it to die.” And later, on “Corinthians,” Woods seamlessly transitions from boasting to warning people about turning their backs on the genocide in Gaza:

If you never came back from the dead you can’t tell me shit
Twelve billion USD hovering over the Gaza Strip
You don’t wanna know what it cost to live
What it cost to hide behind eyelids
When your back turnt, secret cannibals lick they lips

The record features some of Woods’ deftest lyricism, balancing confrontation with philosophy, horror with emotion. Billy Woods’ Golliwog is available on Bandcamp and on most major streaming services, including Apple Music, Qobuz, Deezer, YouTube Music, and Spotify.

Technology

Grok AI scandal sparks global alarm over child safety

Grok, the built-in chatbot on X, is facing intense scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.

In a public post on X, Grok admitted the content “violated ethical standards” and “potentially U.S. laws on child sexual abuse material (CSAM).” The chatbot added, “It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”

That admission alone is alarming. What followed revealed a far broader pattern.

The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children.  (Silas Stein/picture alliance via Getty Images)

Grok quietly restricts image tools to paying users after backlash

As criticism mounted, Grok confirmed it has begun limiting image generation and editing features to paying subscribers only. In a late-night reply on X, the chatbot stated that image tools are now locked behind a premium subscription, directing users to sign up to regain access.

The apology that raised more questions

Grok’s apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.

Around the same time, researchers and journalists uncovered widespread misuse of Grok’s image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.

After reviewing Grok’s publicly accessible photo feed, Copyleaks identified a conservative rate of roughly one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.

Copyleaks CEO and co-founder Alon Yamin said, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”

Grok admitted it generated and shared an AI image that violated ethical standards and may have broken U.S. child protection laws. (Kurt “CyberGuy” Knutsson)

Sexualized images of minors are illegal

This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include five to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.

In 2024, a Pennsylvania man received nearly eight years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.

The scale of the problem is growing fast

A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how quickly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.

Real people are being targeted

The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple instances, Grok fully complied. Even more disturbing, users targeted images of 14-year-old actress Nell Fisher from the Netflix series “Stranger Things.” Grok later admitted there were isolated cases in which users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.

Governments respond worldwide

The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU’s Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country’s IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face potential probes from the Department of Justice or lawsuits tied to these failures.

Researchers later found Grok was widely used to create nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)

Concerns grow over Grok’s safety and government use

The incident raises serious concerns about online privacy, platform security and the safeguards designed to protect minors.

Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.

Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It has also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?

What parents and users should know

If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.

Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.

Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.

Platforms may fail. Safeguards may lag. But early reporting and clear conversations at home remain among the most effective ways to protect children online.

Kurt’s key takeaways

The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer, and children face serious risk. At the same time, trust cannot depend on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.

Should any AI system be approved for government or mass public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com.  All rights reserved.

Technology

Google pulls AI overviews for some medical searches

In one case that experts described as “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended and may increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.
