Technology

AI deepfake romance scam steals woman’s home and life savings

A woman named Abigail believed she was in a romantic relationship with a famous actor. The messages felt real. The voice sounded right. The video looked authentic. And the love felt personal. 

By the time her family realized what was happening, more than $81,000 was gone — and so was the paid-off home she planned to retire in.

I spoke with Vivian Ruvalcaba on my “Beyond Connected” podcast about what happened to her mother and how quickly the scam unfolded. What began as online messages quietly escalated into financial ruin and the loss of a family home. Vivian is Abigail’s daughter, and she is now her mother’s advocate, investigator and protector.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

Vivian Ruvalcaba says a deepfake video made the scam against her mom, Abigail, feel real, using a familiar face and voice to build trust. (Philip Dulian/picture alliance via Getty Images)

How the scam quietly started

The scam did not begin with a phone call or a threat. It began with a message. “Facebook is where it started,” Vivian explained. “She was directly messaged by an individual.” That individual claimed to be Steve Burton, a longtime star of “General Hospital.” Abigail watched the show regularly. She knew his face. She knew his voice.

After a short time, the conversation moved off Facebook. “He then led her to create an account with WhatsApp,” Vivian said. “When I discovered that, and I looked at the messaging, you can see all the manipulation.”

That shift mattered. This is a major red flag I often warn people about. When a scammer moves a conversation from a public platform like Facebook to an encrypted app like WhatsApp, it is usually deliberate and designed to avoid detection.

Grooming through secrecy and isolation

At first, Abigail told no one. “She was very, very secretive,” Vivian said. “She didn’t share any of this with anyone. Not my father. Not me.” 

That secrecy was not accidental. “She was being groomed not to share this information,” Vivian explained.

This is a tactic I see over and over again in scams like this. Once a scammer feels they have someone emotionally invested, the next step is to isolate them. They push victims to keep secrets and avoid talking to family, friends or police. When Vivian finally started asking questions, her mother reacted in a way she never had before. “She said, ‘It’s none of your business,’” Vivian said. “That was shocking.”

The deepfake video that changed everything

When Vivian threatened to go to the police, her mother finally revealed what had been happening. “That’s when she showed me the AI video,” Vivian said. In the clip, a man who looked and sounded like Steve Burton spoke directly to Abigail and referred to her as “Abigail, my queen.” The message felt personal. It used her name and promised love and reassurance.

“It wasn’t grainy,” Vivian said. “To the naked eye, you couldn’t tell.” Still, Vivian sensed something was off. “I looked at it, and I knew right away,” she said. “Mom, this is not real. This is AI.”

Her mother disagreed and argued back. She pointed to the face and the voice. She also believed the phone calls proved it. That is what makes deepfakes so dangerous. When a video looks and sounds real, it can override common sense and even years of trust within a family.

From gift cards to life savings

The money flowed slowly at first. A $500 gift card request raised the first alarm. Then came money orders and Zelle payments. What Vivian discovered next still haunts her. “She pulled out a sandwich baggie,” Vivian said. “About 110 gift cards ranging from $25 up to $500.” Those cards were purchased with credit cards. Cash was mailed. Bitcoin was sent. In total, the Los Angeles Police Department (LAPD) tallied the losses at $81,000. And the scam was not finished.

The scam against Abigail moved from social media to encrypted messaging, a common tactic used to avoid detection. (Kurt “CyberGuy” Knutsson)

When the scammer took her home

After draining Abigail’s available cash, the scam did not stop. It escalated again. The scammer began pushing her to sell the one asset she still had: her home. “He was pressing her to sell,” Vivian told me. “Because he wanted more money.” The pressure came wrapped in romance. The scammer told Abigail they would buy a beach house together and start a new life. In her mind, this was not a scam. It was a plan for the future. That belief set off a chain reaction.

How the home sale happened so quickly

Abigail sold her condo for $350,000, even though similar homes in the area were worth closer to $550,000 at the time. The sale happened quickly. There was no family involvement. Her husband was still living in the home, yet he did not sign the documents. “She just gave away about $200,000 in equity,” Vivian said. “They stole it.”

What makes this even more troubling is who bought the property. According to Vivian, the buyer was a wholesale real estate company that moved fast and asked very few questions. Messages later reviewed by the family show Abigail actively trying to hide the sale from her husband. In one text exchange, she warned the buyer not to park in the driveway because her husband had access to a Ring camera. That alone should have raised concerns. Instead, the buyers went along with it. “They appeased whatever she asked for,” Vivian said. “They were getting a property she was basically giving away.”

These buyers were not the original scammers, but they benefited from the pressure the scammer created. The scammer pushed Abigail to sell. The buyers took advantage of the situation and the deeply discounted price. The home was not extra money; it was Abigail’s retirement. It was the only real security she and her husband had after decades of work. By the time Vivian uncovered the sale, Abigail was days away from sending another $70,000 from the proceeds to the scammer. Had that transfer gone through, nearly everything would have been gone.

This is the part of the story people struggle to process. Modern AI-driven scams are no longer limited to draining bank accounts or gift cards. They now push victims into selling real property, often with opportunistic players waiting on the other side of the deal.

Why police and lawyers could not stop the damage

Vivian contacted the police the same day she realized her mother was being scammed. “They assigned an investigator,” she told me. “He was already very aware of the situation and how little they can help.” That reality is difficult for families to hear, but it is common. 

Many large-scale scams operate overseas. The money moves quickly through gift cards, wire transfers and crypto. By the time victims realize what is happening, the trail is often cold. “Most of these scammers are out of the country,” Vivian said. “No one is being held accountable.”

When the case shifted from criminal to civil

Law enforcement documented the losses and opened a case, but there was little they could do to recover the money or stop what had already happened. The deeper damage came from the home sale, which fell into a legal gray area far beyond a typical fraud report. Once the condo was sold, the situation shifted from a criminal scam to a complex civil fight.

Vivian immediately began searching for legal help. The first attorneys she contacted discouraged her. One told her it could cost more than $150,000 to pursue a case. Another failed to act even after being told about Abigail’s mental illness and history of bipolar disorder. At one point, an eviction attorney testified in court that Vivian never mentioned the romance scam, something she strongly disputes.

By March, Abigail and her husband were forced out of their home. By October, they were fully evicted and locked out. Both parents are now displaced. Abigail is living with family out of state. Her husband, now in his mid-70s, is still working because the home was his retirement. 

It was only after reaching out through personal connections that Vivian found an attorney willing to fight. That attorney is now pursuing the case on a contingency basis, meaning the family does not pay unless there is a recovery. The legal argument centers on Abigail’s mental capacity and whether she could legally understand and execute a home sale under the circumstances. The buyers dispute that claim. The outcome will be decided in court.

This is why stories like this rarely end with a police arrest or quick resolution. Once a scam crosses into real estate and civil law, families are often left to navigate an expensive and exhausting legal system on their own. And by then, the damage has already been done.

Why shame keeps scams hidden

Many victims never report scams. Only about 22% contact the FBI. Fewer than 30% reach out to their local police department. Vivian understands why that happens. “She’s ashamed,” Vivian said. “I know she is.” That shame protects scammers. Silence gives them room to move on and target the next victim.

What started as online messages escalated into gift cards, lost savings and the sale of a family home. (Kurt “CyberGuy” Knutsson)

Red flags families cannot ignore

This case reveals warning signs every family needs to recognize early.

Red flags to watch for

  • Sudden secrecy about finances or online activity
  • Requests for gift cards, cash or crypto
  • Pressure to move conversations to encrypted apps
  • AI videos or voice messages used as proof of identity
  • Emotional manipulation tied to urgency or romance
  • Requests to sell property or move large assets

I want to be very clear about this. It does not matter how smart you are or how careful you think you are. You can become a victim and not realize it until it is too late.

Tips to stay safe and protect your family

These lessons come from both Vivian’s experience and the patterns I see repeatedly in modern scams. Some are emotional. Others are technical. Together, they can help families spot trouble sooner and limit the damage when something feels off.

1) Watch for platform changes

Moving a conversation from Facebook to WhatsApp or another encrypted app is not harmless. Scammers do this to avoid moderation and make messages harder to trace or flag.

2) Question AI proof

Deepfake videos and cloned voices can look and sound convincing. Never treat a video or voice message as proof of identity, especially when money or property is involved.

3) Slow down major financial decisions

Scammers create urgency on purpose. Any request involving large sums, property sales or retirement assets should pause until a trusted third party reviews it.

4) Never send gift cards, cash or crypto

Legitimate people do not ask for payment through gift cards or cryptocurrency. These methods are a common scam tactic because they are hard to trace and nearly impossible to recover.

5) Talk openly as a family

Silence helps scammers. Regular conversations about finances, online contacts and unusual requests make it easier to spot problems early and step in without shame.

6) Reduce online exposure with a data removal service

Scammers research their targets using public databases. They pull names, phone numbers, relatives and property records. Removing that data reduces how easily criminals can build a profile.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

7) Use strong antivirus protection

Malware links can expose financial accounts without obvious signs. Good antivirus software can block malicious links before they lead to deeper access or data theft.

The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

8) Protect assets early

Living trusts and proper estate planning add protection before a crisis hits. They can help prevent rushed property sales and limit who can legally move assets without oversight.

9) Use conservatorship when capacity is limited

“Conservatorship is the only way,” Vivian said. “Power of attorney may not be enough.” When a loved one has diminished capacity, a conservatorship adds court oversight and can stop unauthorized financial decisions before serious damage occurs.

Kurt’s key takeaways

This scam did not rely on sloppy emails or obvious mistakes. It used emotion, familiarity and AI that looked real. Once trust was built, the damage followed quickly. Money disappeared. Secrecy grew. Pressure increased. The home was sold.

What makes this case especially painful is the speed. A few messages led to gift cards. Gift cards turned into life savings. Life savings became the loss of a home built over decades.

Most families never expect this to happen. Many do not talk about it until it has already happened. The lesson is clear. Awareness matters more than intelligence. Open conversations matter more than embarrassment. Acting early matters more than trying to undo the damage later.

If you want to hear Vivian tell this story in her own words and understand how fast these scams unfold, listen to our full conversation on the “Beyond Connected” podcast.

If a deepfake video showed up on your parent’s phone tonight, would you know before everything was gone? Let us know by writing to us at Cyberguy.com.

CLICK HERE TO DOWNLOAD THE FOX NEWS APP

Copyright 2026 CyberGuy.com.  All rights reserved.

Technology

AI influencer awards season is upon us

First came the AI beauty pageant. Then the AI music contests. Now, there is an award for AI Personality of the Year — perhaps the inevitable next step for the AI influencer economy as it transforms from quirky novelty into a serious and lucrative industry.

The contest, a joint venture between generative AI studio OpenArt and AI-powered creator platform Fanvue, with backing from AI voice company ElevenLabs, opens on Monday and runs for a month. The organizers said it is intended to “celebrate the creative talent ‘behind’ AI Influencers” and recognize their growing commercial and cultural clout.

Contestants will compete for a total prize fund of $20,000, which will be split between an overall winner and individual categories of fitness, lifestyle, comedian, music and dance entertainer, and fictional cartoon, anime, or fantasy personality. Victors will be celebrated at an event in May that the organizers are dubbing the “‘Oscars’ for AI personalities.”

To enter, you must develop your AI influencer on OpenArt’s platform and submit it at www.AIpersonality.ai. You’ll be asked for social media handles across TikTok, X, YouTube, and Instagram, as well as the story behind the character, your motivations for creating it, and details of any brand work.

Among those assessing contestants are 13‑time Emmy‑winning comedy writer Gil Rief, the creators of Spanish AI model Aitana Lopez, and Christopher “Topher” Townsend, the MAGA rapper behind AI-generated gospel singer Solomon Ray. According to a copy of the judges’ briefing seen by The Verge, contestants will be scored on four criteria: quality, social clout, brand appeal, and the inspiration behind the avatar. Specific points include reliably engaging with followers, portraying a consistent look across social channels, accurate details like having the “right number of fingers and thumbs,” and having “an authentic narrative” behind the avatar.

The contest is open to established creators and novices alike, though existing AI influencers will still need to submit material produced on OpenArt’s platform, Matt Jones, head of brand at Fanvue, told The Verge.

Despite being designed to celebrate creators of virtual influencers, Jones said that entrants don’t need to publicly identify themselves. “If a person who created this amazing piece of work wants nothing to do with the press or to expose themselves or to have their name out there, that’s obviously fine,” he said. “There would be no need to thrust anybody into the limelight here. We would just celebrate the piece of work.”

That creators can remain anonymous feels odd for a contest judging authenticity, particularly in an AI influencer ecosystem built on fictional people, fake personas, and fabricated backstories. That same anonymity has also helped grifts flourish with little accountability, from the AI white nationalist rapper Danny Bones to MAGA fantasy girl Jessica Foster.

There’s familiar baggage too, including persistent questions about originality, whether AI-generated work, or even a likeness, has been lifted from real creators, and whether these tools simply reproduce the same old biases in synthetic form. Organizer Fanvue has already faced criticism for this in the past: in 2024, a Guardian columnist described its “Miss AI” beauty pageant as something that “take(s) every toxic gendered beauty norm and bundle(s) them up into a completely unrealistic package.”

To Fanvue’s Jones, creators inevitably leave something of themselves in the AI characters they make. “You can’t help but put a little bit of yourself into the stories that you tell and the characters that you make,” he said, urging creators to “lean into that.” The idea feels at home in the influencer economy: not strictly real, but a form of synthetic authenticity the internet already knows how to handle.


Technology

Amazon Health AI brings a doctor to your pocket

Most people have had this moment. You feel a strange symptom, open your phone and start searching online. Within minutes, you are deep in medical forums reading worst-case scenarios. By the end, you are either terrified or more confused than when you started.

Health care should feel clearer than that. Yet for many of us, it rarely does. Appointments take weeks. Medical records are hard to understand. You often have to repeat the same health history at every visit. Insurance rules feel like a maze.

According to the American Academy of Physician Associates, many Americans say navigating the healthcare system feels overwhelming and they wish doctors had more time to listen. Now, a new tool from Amazon hopes to change that experience. It is called Amazon Health AI.

Amazon Health AI lets you ask health questions, review records and connect with care directly through the Amazon app. (Kurt “CyberGuy” Knutsson)

What Amazon Health AI actually does

Amazon Health AI, available at amazon.com/health-ai, acts as a digital health assistant that can answer medical questions and help guide you through your care. The tool lives inside the Amazon app and website.

You start by typing a health question into a chat box. From there, the system can:

  • Explain lab results in plain language
  • Review symptoms and suggest next steps
  • Help schedule care with a provider
  • Assist with prescription renewals
  • Recommend relevant health products if asked

Health AI connects directly with clinicians from Amazon One Medical when professional care is needed. You can message a provider, start a video visit or schedule an in-person appointment. The goal is to make getting care simpler. Instead of spending time searching for appointments or jumping between different apps, you can move from a question to a provider more quickly. If symptoms suggest a possible emergency, the system may advise you to contact emergency services, such as calling 911.

Amazon is gradually rolling the Health AI tool out to U.S. customers, and availability varies by location.

CyberGuy reached out to Amazon for comment about the new service. Andrew Diamond, Ph.D., M.D., chief medical officer at Amazon One Medical, said the goal is to reduce some of the everyday frustrations people face when navigating healthcare.

“Nearly two-thirds of Americans feel overwhelmed by the healthcare system and wish their doctors had more time to understand their concerns,” Diamond said. “Health AI is designed to handle the logistical and informational work that creates friction in healthcare, so patients and providers can spend more time on what matters most: the human relationship at the heart of healing.”

How Amazon Health AI uses your medical history

Health AI becomes more useful when it understands your medical history.

With permission, the system can access information such as:

  • Past diagnoses
  • Medications
  • Lab results
  • Doctor’s notes

This data flows through a secure national network called the Health Information Exchange. Health AI can access records from hundreds of thousands of providers nationwide once permission is granted.

For example, imagine someone with asthma develops a cough during flu season. A generic search might treat that symptom like any other cough. Health AI can look at your history and ask follow-up questions based on your specific risk factors.

Health AI can provide general information about someone else’s health question, but personalized answers are limited to the medical history of the account holder.

That context helps the system provide more relevant guidance. Still, the assistant does not replace doctors. When the situation requires medical judgment, it connects you with a real clinician.

Health AI can help explain lab results, check symptoms and connect you with care through your phone. (Amazon)

How Amazon connects AI with real medical care

The service works closely with Amazon One Medical providers. Prescription renewals can also move through the system, with each request reviewed by a One Medical provider before approval. You can fill prescriptions through Amazon Pharmacy or another pharmacy you prefer. This approach helps reduce the steps people often face when trying to get care.

Special access for Prime members

Amazon is also adding a limited introductory benefit. Eligible members of Amazon Prime can receive up to five free message-based consultations with a One Medical provider.

Neil Lindsay, senior vice president of Amazon Health Services, said the goal is to make care easier to access through the tools people already use. “Eligible Prime member accounts get up to five free direct message care consultations with a One Medical provider for any of the 30 common conditions,” Lindsay said.

These visits cover common conditions, including:

  • Colds and flu
  • Allergies and acid reflux
  • Pink eye and UTIs
  • Hair loss and skin care

Outside the promotion, message or telehealth visits typically cost about $29. A full One Medical membership provides broader virtual care and costs less for Prime members than for non-members.

How Amazon says it protects health data

Health information raises serious privacy questions. Amazon says Health AI runs inside a HIPAA-compliant environment with strong encryption and strict access controls. According to the company, personal health data is not used to sell ads. Amazon also says protected health information from One Medical and Amazon Pharmacy is not used for advertising or sold to third parties.

The system also includes safety guardrails. If the AI cannot confidently answer a question, it directs you to a human provider. Behind the scenes, the technology runs on Amazon’s AI platform called Amazon Bedrock.

Amazon also emphasized that Health AI was designed alongside medical professionals rather than built purely as a technology product.

“This isn’t a chatbot with a healthcare skin,” said Prakash Bulusu, chief technology officer at Amazon Health Services. “It’s a system designed from the ground up to be personalized, trustworthy and useful.”

Bulusu said he personally tested the system with his own health data, and it surfaced lab work he had forgotten to complete after a physical exam.

You can ask Health AI about symptoms and receive guidance before deciding whether to seek medical care.  (Amazon)

Why Amazon believes AI belongs in healthcare

Millions of people already search Amazon for vitamins, blood pressure monitors and health products. The company believes AI can help guide those searches and connect them with medical advice. Amazon also partnered with major health systems, including the Cleveland Clinic and Rush University System for Health, to create smoother referrals between primary care and specialists. The idea is continuity. You should not feel like you are starting from scratch every time you see a new provider.

What this means for you

Tools like Health AI show how quickly artificial intelligence is moving into everyday health decisions. For patients, the potential benefits are clear. Faster answers. Simpler records. Easier access to doctors.

Yet it also raises big questions about privacy, data control and how much we rely on automated systems for health advice. AI can help people understand their health, but the human doctor still plays the most important role. The challenge will be finding the right balance.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.       

Kurt’s key takeaways

Healthcare can be frustrating. Long waits, confusing records and disconnected systems often leave you feeling lost. Amazon believes AI can help guide you through that process. If the technology works as promised, it could help millions of us understand our health faster and reach care sooner. Still, any system that handles sensitive medical information must earn trust over time. That trust will depend on transparency, security and how responsibly companies use personal health data.

Would you feel comfortable letting an AI assistant review your medical history and guide your health decisions? Let us know by writing to us at Cyberguy.com.

CLICK HERE TO DOWNLOAD THE FOX NEWS APP


Technology

Crimson Desert dev apologizes for use of AI art

Reviews of Crimson Desert have been mixed, but the bigger issue for the game has been the discovery of what appeared to be AI-generated assets in the final release. Now the developer has acknowledged that AI art was indeed used during the game’s creation, but says that it was intended to be replaced before release. In a statement on X, the company said it was conducting a “comprehensive audit” to identify and replace any AI-generated content.

The company apologized both for the AI art’s inclusion in the final release and for not being more transparent about its use during development. “We should have clearly disclosed our use of AI,” it said.

The use of generative AI in gaming has become a hot-button issue over the last couple of years as it has made its way into several high-profile titles. While some large studios have embraced it, many smaller developers have revolted against the trend, proudly proclaiming their games to be “AI free.”
