FBI warns of fake kidnapping photos used in new scam

The FBI is warning about a disturbing scam that turns family photos into powerful weapons. Cybercriminals are stealing images from social media accounts, altering them and using them as fake proof of life in virtual kidnapping scams.

These scams do not involve real abductions. Instead, criminals rely on fear, speed and believable images to pressure victims into paying ransom before they can think clearly.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.

Scammers steal photos from public social media accounts and manipulate them to create fake proof of life images that fuel fear and urgency. (Kurt “CyberGuy” Knutsson)

How the fake kidnapping scam works

According to the FBI, scammers usually start with a text message. They claim they have kidnapped a loved one and demand immediate payment for their release. To make the threat feel real, the criminals send an altered photo pulled from social media. The FBI says these images may be sent using timed messages to limit how long victims can examine them. The agency warns that scammers often threaten extreme violence if the ransom is not paid right away. This urgency is designed to shut down rational thinking.

Signs the photo may be fake

When victims slow down and look closely, the altered images often fall apart. The FBI says warning signs may include missing scars or tattoos, strange body proportions or details that do not match reality. Scammers may also spoof a loved one’s phone number, which makes the message feel even more convincing. Reports on sites like Reddit show this tactic is already being used in the real world.

Why this fake kidnapping scam is so effective

Virtual kidnapping scams work because they exploit emotion. Fear pushes people to act fast, especially when the message appears to come from someone they trust. The FBI notes that criminals use publicly available information to personalize their threats. Even posts meant to help others, such as missing person searches, can provide useful details for scammers.

Ways to stay safe from virtual kidnapping scams

The FBI recommends several steps to protect yourself and your family.

  • Be mindful of what you post online, especially photos and personal details
  • Avoid sharing travel information in real time
  • Create a family code word that only trusted people know
  • Pause and question whether the claims make sense
  • Screenshot or record proof-of-life photos
  • If you receive a message like this, try to contact your loved one directly before doing anything else

Staying calm is one of your strongest defenses. Slowing down gives you time to spot red flags and avoid costly mistakes.

How to strengthen your digital defenses against virtual kidnapping scams

When scammers can access your photos, phone numbers and personal details, they can turn fear into leverage. These steps help reduce what criminals can find and give you clear actions to take if a threat appears.

1) Lock down your social media accounts

Review the privacy settings on every social platform you use. Set profiles to private so only trusted friends and family can see your photos, posts and personal updates. Virtual kidnapping scams rely heavily on publicly visible images. Limiting access makes it harder for criminals to steal photos and create fake proof-of-life images.

Limiting what you share online and slowing down to verify claims can help protect your family from panic-driven scams like this one. (Jaap Arriens/NurPhoto via Getty Images)

2) Be cautious about what you share online

Avoid posting real-time travel updates, daily routines or detailed family information. Even close-up photos that show tattoos, scars or locations can give scammers useful material. The less context criminals have, the harder it is for them to make a threat feel real and urgent.

3) Use strong antivirus software on all devices

Install strong antivirus software on computers, phones and tablets. Strong protection helps block phishing links, malicious downloads and spyware often tied to scam campaigns. Keeping your operating system and security tools updated also closes security gaps that criminals exploit to gather personal data.

The best way to safeguard yourself from malicious links that install malware and can expose your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

4) Consider a data removal service to reduce exposure

Data brokers collect and sell personal information pulled from public records and online activity. A data removal service helps locate and remove your details from these databases. Reducing what is available online makes it harder for scammers to impersonate loved ones or personalize fake kidnapping threats.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind, and it has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

5) Limit facial data in public profiles

Review older public photo albums and remove images that clearly show faces from multiple angles. Avoid posting large collections of high-resolution facial photos publicly. Scammers often need multiple images to convincingly alter photos. Reducing facial data weakens their ability to manipulate images.

6) Establish a family verification plan

Create a simple verification plan with loved ones before an emergency happens. This may include a shared code word, a call-back rule or a second trusted contact. Scammers depend on panic. Having a preset way to verify safety gives you something steady to rely on when emotions run high.

7) Secure phone accounts and enable SIM protection

Contact your mobile carrier and ask about SIM protection or a port-out PIN. This helps prevent criminals from hijacking phone numbers or spoofing calls and texts. Since many fake kidnapping scams begin with messages that appear to come from a loved one, securing phone accounts adds an important layer of protection.

The FBI warns that these virtual kidnapping scams often begin with a text message that pressures victims to pay a ransom immediately. (Getty Images)

8) Save evidence and report the scam

If you receive a threat, save screenshots, phone numbers, images and message details. Do not continue engaging with the sender. Report the incident to the FBI’s Internet Crime Complaint Center (IC3) at ic3.gov. Even if no money is lost, reports help investigators track patterns and warn others.

Kurt’s key takeaways

Virtual kidnapping scams show how quickly personal photos can be weaponized. Criminals do not need real victims when fear alone can drive action. Taking time to verify claims, limiting what you share online and strengthening your digital defenses can make a major difference. Awareness and preparation remain your best protection.

Have you or someone you know encountered a scam like this? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.

LG’s CLOiD robot can load the washer for you, slowly

LG’s CLOiD robot took the stage at CES 2026 on Monday, offering our first look at the bot in action. During LG’s keynote, the company showed how CLOiD can load your washer or dryer, albeit slowly, as part of its goal of creating a “zero labor home.”

CLOiD waved both of its five-fingered hands as it rolled out on stage. Brandt Varner, LG’s vice president of sales in its home appliances division, followed behind and asked the bot to take care of the wet towel he was holding. “Sure, I’ll get the laundry started,” CLOiD said in a masculine-sounding voice. “Let me show everyone what I can do.”

The bot’s animated eyes “blinked” as it rolled closer to a washer that opened automatically (I hope CLOiD can open that door itself!), extending its left arm into the washer and dropping the towel into the drum. The whole process — from getting the towel to putting it in the machine — took nearly 30 seconds, which makes me wonder how long it would take to load a week’s worth of laundry.

The bot returned later in the keynote to bring a bottle of water to another presenter, Steve Scarbrough, the senior vice president of LG’s HVAC division. “I noticed by your voice and tone that you might want some water,” it said before handing over the bottle and giving Scarbrough a fist bump.

There’s still no word on when, or if, LG CLOiD will ever be available for purchase, but at least we’ll have WALL-E’s weird cousin to help out with some tasks around the home.

Can AI chatbots trigger psychosis in vulnerable people?

Artificial intelligence chatbots are quickly becoming part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most, that interaction feels harmless. However, mental health experts now warn that for a small group of vulnerable people, long and emotionally charged conversations with AI may worsen delusions or psychotic symptoms.

Doctors stress this does not mean chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

What psychiatrists are seeing in patients using AI chatbots

Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.

Mental health experts warn that emotionally intense conversations with AI chatbots may reinforce delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)

Clinicians say this feedback loop can deepen delusions in susceptible individuals. In several documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic raises concern when AI conversations are frequent, emotionally engaging and left unchecked.

Why AI chatbot conversations feel different from past technology

Mental health experts note that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember prior conversations and adopt supportive language. That experience can feel personal and validating. 

For individuals already struggling with reality testing, those qualities may increase fixation rather than encourage grounding. Clinicians caution that risk may rise during periods of sleep deprivation, emotional stress or existing mental health vulnerability.

How AI chatbots can reinforce false or delusional beliefs

Doctors say many reported cases center on delusions rather than hallucinations. These beliefs may involve perceived special insight, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenge it. While that design improves engagement, clinicians warn it can be problematic when a belief is false and rigid.

Mental health professionals say the timing of symptom escalation matters. When delusions intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.

Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

What research and case reports reveal about AI chatbots

Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense chatbot engagement. In some instances, individuals with no prior history of psychosis required hospitalization after developing fixed false beliefs connected to AI conversations. International studies reviewing health records have also identified patients whose chatbot activity coincided with negative mental health outcomes. Researchers emphasize that these findings are early and require further investigation.

A peer-reviewed Special Report published in Psychiatric News titled “AI-Induced Psychosis: A New Frontier in Mental Health” examined emerging concerns around AI-induced psychosis and cautioned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; currently, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.” The authors emphasize that while reported cases are serious and warrant further investigation, the current evidence base remains preliminary and heavily dependent on anecdotal and nonsystematic reporting.

What AI companies say about mental health risks

OpenAI says it continues working with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce excessive agreement and encourage real-world support when appropriate. OpenAI has also announced plans to hire a new Head of Preparedness, a role focused on identifying potential harms tied to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems grow more capable.

Other chatbot developers have adjusted policies as well, particularly around access for younger audiences, after acknowledging mental health concerns. Companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.

What this means for everyday AI chatbot use

Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots experience no psychological issues. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes tied to heavy chatbot engagement.

Researchers are studying whether prolonged chatbot use may contribute to mental health declines among people already at risk for psychosis. (Photo Illustration by Jaque Silva/NurPhoto via Getty Images)

Tips for using AI chatbots more safely

Mental health experts stress that most people can interact with AI chatbots without problems. Still, a few practical habits may help reduce risk during emotionally intense conversations.

  • Avoid treating AI chatbots as a replacement for professional mental health care or trusted human support.
  • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
  • Be cautious if an AI response strongly reinforces beliefs that feel unrealistic or extreme.
  • Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
  • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

If emotional distress or unusual thoughts increase, experts say it is important to seek help from a qualified mental health professional.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz at Cyberguy.com.

Kurt’s key takeaways

AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain helpful tools. For a small but important group, they may unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

As AI becomes more validating and humanlike, should there be clearer limits on how it engages during emotional or mental health distress? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.

Birdbuddy’s new smart feeders aim to make spotting birds easier, even for beginners

Birdbuddy is introducing two new smart bird feeders: the flagship Birdbuddy 2 and the more compact, cheaper Birdbuddy 2 Mini, aimed at first-time users and smaller outdoor spaces. Both models are designed to be faster and easier to use than previous generations, with upgraded cameras that can shoot in portrait or landscape and wake instantly when a bird lands, so you’re less likely to miss the good stuff.

The Birdbuddy 2 costs $199 and features a redesigned circular camera housing that delivers 2K HDR video, slow-motion recording, and a wider 135-degree field of view. The upgraded built-in mic should also better pick up birdsong, which could make identifying species easier using both sound and sight.

The feeder itself offers a larger seed capacity and an integrated perch extender, along with support for both 2.4GHz and 5GHz Wi-Fi for more stable connectivity. The new model also adds dual integrated solar panels to help keep it powered throughout the day, plus a night sleep mode to conserve power.

The Birdbuddy 2 Mini is designed to deliver the same core AI bird identification and camera experience, but in a smaller, more accessible package. At 6.95 inches tall with a smaller seed capacity, it’s geared toward first-time smart birders and smaller outdoor spaces like balconies, and it supports an optional solar panel.

Birdbuddy 2’s first batch of preorders has already sold out, with shipments expected in February 2026 and wider availability set for mid-2026. Meanwhile, the Birdbuddy 2 Mini will be available to preorder for $129 in mid-2026, with the company planning to ship the smart bird feeder in late 2026.
