
AI deepfake romance scam steals woman’s home and life savings



A woman named Abigail believed she was in a romantic relationship with a famous actor. The messages felt real. The voice sounded right. The video looked authentic. And the love felt personal. 

By the time her family realized what was happening, more than $81,000 was gone — and so was the paid-off home she planned to retire in.

We spoke with Vivian Ruvalcaba on my “Beyond Connected” podcast about what happened to her mother and how quickly the scam unfolded. What began as online messages quietly escalated into financial ruin and the loss of a family home. Vivian is Abigail’s daughter, and she has become her mother’s investigator, chief advocate and protector.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


FROM FRIENDLY TEXT TO FINANCIAL TRAP: THE NEW SCAM TREND

Vivian Ruvalcaba says a deepfake video made the scam against her mom, Abigail, feel real, using a familiar face and voice to build trust. (Philip Dulian/picture alliance via Getty Images)

How the scam quietly started

The scam did not begin with a phone call or a threat. It began with a message. “Facebook is where it started,” Vivian explained. “She was directly messaged by an individual.” That individual claimed to be Steve Burton, a longtime star of “General Hospital.” Abigail watched the show regularly. She knew his face. She knew his voice.

After a short time, the conversation moved off Facebook. “He then led her to create an account with WhatsApp,” Vivian said. “When I discovered that, and I looked at the messaging, you can see all the manipulation.”

That shift mattered. This is a major red flag I often warn people about. When a scammer moves a conversation from a public platform like Facebook to an encrypted app like WhatsApp, it is usually deliberate and designed to avoid detection.


Grooming through secrecy and isolation

At first, Abigail told no one. “She was very, very secretive,” Vivian said. “She didn’t share any of this with anyone. Not my father. Not me.” 

That secrecy was not accidental. “She was being groomed not to share this information,” Vivian explained.

This is a tactic I see over and over again in scams like this. Once a scammer feels they have someone emotionally invested, the next step is to isolate them. They push victims to keep secrets and avoid talking to family, friends or police. When Vivian finally started asking questions, her mother reacted in a way she never had before. “She said, ‘It’s none of your business,’” Vivian said. “That was shocking.”

The deepfake video that changed everything

When Vivian threatened to go to the police, her mother finally revealed what had been happening. “That’s when she showed me the AI video,” Vivian said. In the clip, a man who looked and sounded like Steve Burton spoke directly to Abigail and referred to her as “Abigail, my queen.” The message felt personal. It used her name and promised love and reassurance.

“It wasn’t grainy,” Vivian said. “To the naked eye, you couldn’t tell.” Still, Vivian sensed something was off. “I looked at it, and I knew right away,” she said. “Mom, this is not real. This is AI.”


Her mother disagreed and argued back. She pointed to the face and the voice. She also believed the phone calls proved it. That is what makes deepfakes so dangerous. When a video looks and sounds real, it can override common sense and even years of trust within a family.

From gift cards to life savings

The money flowed slowly at first. A $500 gift card request raised the first alarm. Then, money orders and Zelle payments. What Vivian discovered next still haunts her. “She pulled out a sandwich baggie,” Vivian said. “About 110 gift cards ranging from $25 up to $500.” Those cards were purchased with credit cards. Cash was mailed. Bitcoin was sent. In total, the Los Angeles Police Department (LAPD) tallied the losses at $81,000. And the scam was not finished.

The scam against Abigail moved from social media to encrypted messaging, a common tactic used to avoid detection. (Kurt “CyberGuy” Knutsson)

When the scammer took her home

After draining Abigail’s available cash, the scam did not stop. It escalated again. The scammer began pushing her to sell the one asset she still had: her home. “He was pressing her to sell,” Vivian told me. “Because he wanted more money.” The pressure came wrapped in romance. The scammer told Abigail they would buy a beach house together and start a new life. In her mind, this was not a scam. It was a plan for the future. That belief set off a chain reaction.

How the home sale happened so quickly

Abigail sold her condo for $350,000, even though similar homes in the area were worth closer to $550,000 at the time. The sale happened quickly. There was no family involvement. Her husband was still living in the home, yet he did not sign the documents. “She just gave away about $200,000 in equity,” Vivian said. “They stole it.”


What makes this even more troubling is who bought the property. According to Vivian, the buyer was a wholesale real estate company that moved fast and asked very few questions. Messages later reviewed by the family show Abigail actively trying to hide the sale from her husband. In one text exchange, she warned the buyer not to park in the driveway because her husband had access to a Ring camera. That alone should have raised concerns. Instead, the buyers went along with it. “They appeased whatever she asked for,” Vivian said. “They were getting a property she was basically giving away.”

These buyers were not the original scammers, but they benefited from the pressure the scammer created. The scammer pushed Abigail to sell. The buyers took advantage of the situation and the deeply discounted price. The home was not extra money, it was Abigail’s retirement. It was the only real security she and her husband had after decades of work. By the time Vivian uncovered the sale, Abigail was days away from sending another $70,000 from the proceeds to the scammer. Had that transfer gone through, nearly everything would have been gone.

This is the part of the story people struggle to process. Modern AI-driven scams are no longer limited to draining bank accounts or gift cards. They now push victims into selling real property, often with opportunistic players waiting on the other side of the deal.

Why police and lawyers could not stop the damage

Vivian contacted the police the same day she realized her mother was being scammed. “They assigned an investigator,” she told me. “He was already very aware of the situation and how little they can help.” That reality is difficult for families to hear, but it is common. 

Many large-scale scams operate overseas. The money moves quickly through gift cards, wire transfers and crypto. By the time victims realize what is happening, the trail is often cold. “Most of these scammers are out of the country,” Vivian said. “No one is being held accountable.”


When the case shifted from criminal to civil

Law enforcement documented the losses and opened a case, but there was little they could do to recover the money or stop what had already happened. The deeper damage came from the home sale, which fell into a legal gray area far beyond a typical fraud report. Once the condo was sold, the situation shifted from a criminal scam to a complex civil fight.

Vivian immediately began searching for legal help. The first attorneys she contacted discouraged her. One told her it could cost more than $150,000 to pursue a case. Another failed to act even after being told about Abigail’s mental illness and history of bipolar disorder. At one point, an eviction attorney testified in court that Vivian never mentioned the romance scam, something she strongly disputes.

By March, Abigail and her husband were forced out of their home. By October, they were fully evicted and locked out. Both parents are now displaced. Abigail is living with family out of state. Her husband, now in his mid-70s, is still working because the home was his retirement. 

It was only after reaching out through personal connections that Vivian found an attorney willing to fight. That attorney is now pursuing the case on a contingency basis, meaning the family does not pay unless there is a recovery. The legal argument centers on Abigail’s mental capacity and whether she could legally understand and execute a home sale under the circumstances. The buyers dispute that claim. The outcome will be decided in court.

This is why stories like this rarely end with a police arrest or quick resolution. Once a scam crosses into real estate and civil law, families are often left to navigate an expensive and exhausting legal system on their own. And by then, the damage has already been done.


Why shame keeps scams hidden

Many victims never report scams. Only about 22% contact the FBI. Fewer than 30% reach out to their local police department. Vivian understands why that happens. “She’s ashamed,” Vivian said. “I know she is.” That shame protects scammers. Silence gives them room to move on and target the next victim.

INSIDE A SCAMMER’S DAY AND HOW THEY TARGET YOU

What started as online messages escalated into gift cards, lost savings and the sale of a family home. (Kurt “CyberGuy” Knutsson)

Red flags families cannot ignore

This case reveals warning signs every family needs to recognize early.

Red flags to watch for

  • Sudden secrecy about finances or online activity
  • Requests for gift cards, cash or crypto
  • Pressure to move conversations to encrypted apps
  • AI videos or voice messages used as proof of identity
  • Emotional manipulation tied to urgency or romance
  • Requests to sell property or move large assets

I want to be very clear about this. It does not matter how smart you are or how careful you think you are. You can become a victim and not realize it until it is too late.

Tips to stay safe and protect your family

These lessons come from both Vivian’s experience and the patterns I see repeatedly in modern scams. Some are emotional. Others are technical. Together, they can help families spot trouble sooner and limit the damage when something feels off.


1) Watch for platform changes

Moving a conversation from Facebook to WhatsApp or another encrypted app is not harmless. Scammers do this to avoid moderation and make messages harder to trace or flag.

2) Question AI proof

Deepfake videos and cloned voices can look and sound convincing. Never treat a video or voice message as proof of identity, especially when money or property is involved.

3) Slow down major financial decisions

Scammers create urgency on purpose. Any request involving large sums, property sales or retirement assets should pause until a trusted third party reviews it.

4) Never send gift cards, cash or crypto

Legitimate people do not ask for payment through gift cards or cryptocurrency. These methods are a common scam tactic because they are hard to trace and nearly impossible to recover.

5) Talk openly as a family

Silence helps scammers. Regular conversations about finances, online contacts and unusual requests make it easier to spot problems early and step in without shame.


6) Reduce online exposure with a data removal service

Scammers research their targets using public databases. They pull names, phone numbers, relatives and property records. Removing that data reduces how easily criminals can build a profile.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


7) Use strong antivirus protection

Malware links can expose financial accounts without obvious signs. Good antivirus software can block malicious links before they lead to deeper access or data theft.


The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

8) Protect assets early

Living trusts and proper estate planning add protection before a crisis hits. They can help prevent rushed property sales and limit who can legally move assets without oversight.

9) Use conservatorship when capacity is limited

“Conservatorship is the only way,” Vivian said. “Power of attorney may not be enough.” When a loved one has diminished capacity, a conservatorship adds court oversight and can stop unauthorized financial decisions before serious damage occurs.

Kurt’s key takeaways

This scam did not rely on sloppy emails or obvious mistakes. It used emotion, familiarity and AI that looked real. Once trust was built, the damage followed quickly. Money disappeared. Secrecy grew. Pressure increased. The home was sold.

What makes this case especially painful is the speed. A few messages led to gift cards. Gift cards turned into life savings. Life savings became the loss of a home built over decades. Most families never expect this to happen. Many do not talk about it until it has already happened.

The lesson is clear. Awareness matters more than intelligence. Open conversations matter more than embarrassment. Acting early matters more than trying to undo the damage later. If you want to hear Vivian tell this story in her own words and understand how fast these scams unfold, listen to our full conversation on the “Beyond Connected” podcast.


If a deepfake video showed up on your parent’s phone tonight, would you know before everything was gone? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.



Here’s your first look at Kratos in Amazon’s God of War show


Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image but a notable one showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.

There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:

The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.

That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best-known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes: Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).

While production is underway on the God of War series, there’s no word on when it might start streaming.



300,000 Chrome users hit by fake AI extensions



Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.

They used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.


More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)


What you need to know about fake AI extensions

Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.

Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.

These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.

While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:

  • AI Assistant
  • Llama
  • Gemini AI Sidebar
  • AI Sidebar
  • ChatGPT Sidebar
  • Grok
  • Asking ChatGPT
  • ChatGBT
  • Chat Bot GPT
  • Grok Chatbot
  • Chat With Gemini
  • XAI
  • Google Gemini
  • Ask Gemini
  • AI Letter Generator
  • AI Message Generator
  • AI Translator
  • AI For Translation
  • AI Cover Letter Generator
  • AI Image Generator ChatGPT
  • Ai Wallpaper Generator
  • Ai Picture Generator
  • DeepSeek Download
  • AI Email Writer
  • Email Generator AI
  • DeepSeek Chat
  • ChatGPT Picture Generator
  • ChatGPT Translate
  • AI GPT
  • ChatGPT Translation
  • ChatGPT for Gmail

FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)


How the fake AI Chrome extension attack works

These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.

Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.

In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.

The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.

Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.


If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.

We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”

BROWSER EXTENSION MALWARE INFECTED 8.8M USERS IN DARKSPECTRE ATTACK

Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)

7 ways you can protect yourself from malicious Chrome extensions

If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.


1) Remove any suspicious or unused browser extensions

On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
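If you prefer to double-check outside the browser, the same review can be scripted. The sketch below is an illustration, not an official Google tool: it assumes Chrome's standard on-disk layout of Extensions/&lt;id&gt;/&lt;version&gt;/manifest.json, and the permission list it flags is my own illustrative choice of broad-access permissions. It reads each installed extension's manifest and reports the ones that can see what you do on every site.

```python
import json
import sys
from pathlib import Path

# Illustrative set of permissions that grant broad access to browsing data.
RISKY = {"<all_urls>", "tabs", "webRequest", "cookies", "history", "clipboardRead"}

def audit_extensions(extensions_dir: str) -> list[dict]:
    """Scan a Chrome Extensions directory and flag risky permissions.

    Chrome stores each installed extension under
    Extensions/<extension id>/<version>/manifest.json.
    """
    findings = []
    for manifest in Path(extensions_dir).glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable manifest; skip it
        # Combine classic permissions with Manifest V3 host_permissions.
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        risky = sorted(perms & RISKY)
        if risky:
            findings.append({
                "name": data.get("name", "unknown"),
                "id": manifest.parent.parent.name,
                "risky_permissions": risky,
            })
    return findings

if __name__ == "__main__" and len(sys.argv) > 1:
    # Typical profile locations (these vary by OS and profile; check yours):
    #   Windows: %LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions
    #   macOS:   ~/Library/Application Support/Google/Chrome/Default/Extensions
    for f in audit_extensions(sys.argv[1]):
        print(f"{f['name']} ({f['id']}): {', '.join(f['risky_permissions'])}")
```

A flagged permission is not proof of malice; many legitimate extensions legitimately request broad access. But any extension you don't remember installing that can read every page deserves immediate removal.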

2) Change your passwords

If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.

3) Use a password manager to create and protect strong passwords

A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
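If you want a one-off strong password without a manager, Python's standard secrets module (designed for cryptographic randomness, unlike random) is enough. This is a minimal sketch of the idea, not a substitute for a full password manager:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password containing at least one lowercase letter,
    one uppercase letter, one digit and one punctuation character."""
    if length < 4:
        raise ValueError("length must be at least 4 to cover all character classes")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        # Draw uniformly, then reject passwords missing any character class.
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

The rejection loop is a deliberate choice: it guarantees every character class appears while keeping the draw uniform, avoiding the slight predictability of "pick one from each class, then shuffle" schemes.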

4) Install strong antivirus software and keep it active

Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

5) Use an identity theft protection service

Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.


6) Keep your browser and computer fully updated

Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.

7) Use a personal data removal service

Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

Kurt’s key takeaway

Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.


Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.






Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance


Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.

It’s the culmination of a dramatic exchange of public statements, social media posts and behind-the-scenes negotiations, driven by Defense Secretary Pete Hegseth’s desire to renegotiate all AI labs’ current contracts with the military. Anthropic, so far, has refused to back down from its two red lines: no mass surveillance of Americans, and no lethal autonomous weapons (that is, weapons licensed to kill targets without any human oversight). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the secretary reportedly gave the CEO an ultimatum: back down by the end of business day on Friday, or else.

In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)

The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step to designating the company a “supply chain risk” – a public threat that the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to make Anthropic comply.

Advertisement

Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”
