
Fake PayPal email let hackers access computer and bank account


Online scams are becoming more dangerous and convincing every day. Cybercriminals are finding new ways to gain not just your login credentials but full control of your computer and your bank accounts.

Take John from King George, Virginia, for example. He recently shared his alarming experience with us. His story is a powerful warning about how quickly things can escalate if you respond to suspicious emails.

Here is what happened to John in his own words: “I mistakenly responded to a false PayPal email notifying me of a laptop purchase. The message looked real, and I called the number listed. The person on the phone gave me a strange number to enter into my browser, which installed an app that took control of my PC. A warning popped up saying ‘software updating – do not turn off PC,’ and I could see my entire file system being scanned. The scammer accessed my bank account and transferred money between accounts. He told me to leave my PC running and go to the bank, keeping him on the phone without telling anyone what was happening. I shut everything down, contacted my bank, and changed my passwords.”

John’s quick thinking in shutting down his computer and alerting his bank helped minimize the damage. However, not everyone is as lucky.

Join the FREE “CyberGuy Report”: Get my expert tech tips, critical security alerts and exclusive deals, plus instant access to my free “Ultimate Scam Survival Guide” when you sign up!


Fake PayPal scam email (Kurt “CyberGuy” Knutsson)

How this scam works

This type of scam is known as a remote access scam. It often begins with a fake email that appears to come from a trusted company like PayPal. The message claims there is an issue, such as an unauthorized charge, and urges the victim to call a phone number or click a link. 


Once the scammer makes contact, they guide the victim to enter a code into their browser or install a program, claiming it will fix the problem. In reality, this grants the scammer full control of the victim’s computer. 

Once inside, scammers often search for sensitive files, access banking websites, steal login credentials or install malware to maintain long-term access. Even if the immediate scam is stopped, hidden malware can allow scammers to reenter the system later.


A hacker at work (Kurt “CyberGuy” Knutsson)


Key takeaways from John’s experience

John’s close call highlights several important lessons.

Fake emails are harder to spot than ever: Scammers create emails that look almost identical to real ones from trusted companies like PayPal. They copy logos, formatting and even fake customer support numbers. Always double-check the sender’s email address and verify communications by visiting the official website or app directly instead of clicking links inside emails.

Remote access scams can escalate fast: Once scammers gain control of your device, they can steal sensitive data, move funds between accounts and install hidden malware that stays behind even after the scammer disconnects. It often takes only minutes for serious damage to be done, making fast recognition critical.


Psychological pressure plays a big role: Scammers rely on creating a sense of urgency and fear. By keeping you on the phone and urging secrecy, they isolate you from help and rush you into making bad decisions. Recognizing when you are being pressured is key to breaking the scammer’s control.

Fast action can make all the difference: By quickly disconnecting his computer and contacting his bank, John limited the scammer’s access to his accounts. Acting within minutes rather than hours can stop further theft, block fraudulent transactions and protect your sensitive information from being fully compromised.

A warning on a laptop home screen (Kurt “CyberGuy” Knutsson)



How to protect yourself from remote access scams

Taking simple but strong security steps can protect you from falling victim.

1. Never call a number listed in a suspicious email: Scammers often set up fake phone numbers that sound professional but are designed to manipulate you into handing over control or information. Always find verified contact information through a company’s official website or app, not links/numbers provided in suspicious messages.

2. Be skeptical of unusual instructions: No legitimate company will ask you to install software or enter strange codes to protect your account. If anything seems unusual, trust your instincts and stop the communication immediately.

3. Install strong antivirus software on all devices: Antivirus programs can detect suspicious downloads, block remote access attempts and help prevent hackers from taking over your system. Having strong antivirus protection installed across all your devices is the best way to safeguard yourself from malicious links that install malware and attempt to access your private information. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

4. Use identity theft protection: These services monitor financial accounts, credit reports and online activity for signs of fraud, alerting you to suspicious transactions. See my tips and best picks on how to protect yourself from identity theft.


5. React immediately if you suspect a scam: Disconnect your device from the internet, contact your bank or credit card company immediately and change your passwords, especially for banking and email accounts. Monitor your accounts closely for unauthorized activity and report the scam to the Federal Trade Commission as well as the company that was impersonated. Acting quickly can prevent further access and limit the damage scammers can cause.

6. Use multifactor authentication (MFA): MFA adds a critical layer of security beyond passwords, blocking unauthorized logins even if credentials are stolen. Enable MFA on all accounts, especially banking, email and payment platforms, to stop scammers from bypassing stolen passwords.

7. Update devices and software immediately: Regular updates patch security flaws that scammers exploit to install malware or hijack systems. Turn on automatic updates wherever possible to ensure you’re always protected against newly discovered vulnerabilities.

8. Employ a password manager with strong, unique passwords: Avoid password reuse and use complex passphrases to minimize credential-stuffing attacks. A password manager generates and stores uncrackable passwords, eliminating the risk of weak or repeated credentials. Get more details about my best expert-reviewed password managers of 2025 here.

9. Never share screen access or grant remote control: Scammers exploit screen-sharing tools to steal passwords and manipulate transactions in real time. Legitimate tech support will never demand unsolicited screen access; terminate the call immediately if pressured.


10. Invest in personal data removal services: These services automate requests to delete your personal information from data brokers and people-search sites, reducing publicly available details scammers could exploit for phishing or impersonation. While no service promises to remove all your data from the internet, having a removal service is great if you want to constantly monitor and automate the process of removing your information from hundreds of sites continuously over a longer period of time. Check out my top picks for data removal services here.
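Step 6 above recommends multifactor authentication. For readers curious what actually happens when an authenticator app shows a six-digit code, here is a minimal sketch of the time-based one-time password math (RFC 6238) such apps implement; the secret below is an illustrative value from the RFC's test data, not one you would ever use in practice.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, as in RFC 6238 (HMAC-SHA1 variant)."""
    # Time is divided into 30-second steps; your phone and the server
    # compute the same counter independently from their clocks.
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the last nibble of the MAC picks 4 bytes out of it.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code depends on both a shared secret and the current time, a scammer who steals only your password still cannot log in, which is why step 6 protects you even after your credentials leak.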


Kurt’s key takeaways

John’s story is a reminder that online scams are evolving quickly and becoming more aggressive. Staying skeptical, verifying all suspicious messages and acting quickly if something feels wrong can make the difference between staying safe and losing sensitive information. Protect your devices, trust your instincts and remember it is always better to be cautious than to take a risk with your security.

Have you or someone you know been targeted by a scam like this? Let us know by writing us at Cyberguy.com/Contact.


For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question or let us know what stories you’d like us to cover.


Copyright 2025 CyberGuy.com. All rights reserved.

Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya

Meta’s AI-powered smart glasses could be sending sensitive footage to human reviewers in Nairobi, Kenya, according to an investigation by the Swedish outlets Svenska Dagbladet and Göteborgs-Posten. The report, which was published last week, claims Meta contractors in Kenya have seen videos captured with the smart glasses that show “bathroom visits, sex and other intimate moments.”

So far, at least one proposed class action lawsuit accusing Meta of violating false advertising and privacy laws has emerged in response to Svenska Dagbladet’s reporting, citing the company’s claim that its smart glasses are designed for privacy:

By affirmatively claiming that the Glasses were designed to protect privacy, Meta assumed a duty to disclose material facts that would inform a reasonable consumer’s decision to purchase the product. Instead, Meta hid the alarming reality: that use of the AI features results in a stranger halfway around the world watching the most private moments of a person’s life.

The Nairobi-based contractors interviewed by Svenska Dagbladet are AI annotators, meaning they label images, text, or audio, with the goal of helping AI systems make sense of the data they’re training on. “We see everything — from living rooms to naked bodies,” one worker says, according to Svenska Dagbladet. “Meta has that type of content in its databases.”

A former Meta employee reportedly tells Svenska Dagbladet that faces in annotation data are blurred automatically, though workers in Kenya say this “does not always work as intended,” and some faces are still visible. Another person reportedly tells the outlet that a wearer’s bank cards are sometimes seen in the footage they review as well.

Meta’s Ray-Ban and Oakley smart glasses come with a built-in AI assistant capable of answering questions about what a user can see. The glasses have soared in popularity in recent years, despite growing concerns over privacy and surveillance.


EssilorLuxottica, the eyewear giant that Meta works with to develop the camera-equipped glasses, sold over 7 million of the AI-powered glasses in 2025 — more than tripling its sales in 2023 and 2024 combined. Last year, Meta made some changes to its privacy policy that keep Meta AI with camera use enabled on your glasses “unless you turn off ‘Hey Meta.’” It also stopped allowing wearers to opt out of storing their voice recordings in the cloud.

As reported by Svenska Dagbladet, the Kenya-based AI reviewers work with transcriptions as well, ensuring Meta AI provides the correct answer to the questions users ask aloud. In a statement to The Verge, Meta spokesperson Tracy Clayton says media captured by its smart glasses “stays on the user’s device” unless they choose to share it with other people or Meta.

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Clayton says. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

Inside Microsoft’s AI content verification plan


Scroll your social media feed for five minutes. You will likely see something that looks real but feels slightly off.

Maybe it is a viral protest image that turns out to be altered. Maybe it is a slick video pushing a political narrative. Or maybe it is an artificial intelligence voice clip that spreads before anyone stops to question it.

AI-enabled deception now permeates everyday life. And Microsoft says it has a technical blueprint to help verify where online content comes from and whether it has been altered.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


Microsoft’s proposal would attach digital fingerprints and metadata to help trace where online content originated. (YorVen/Getty Images)

Why AI-generated content feels more convincing today

AI tools can now generate hyperrealistic images, clone voices and create interactive deepfakes that respond in real time. What once required a studio or intelligence agency now requires a browser window. That shift changes the stakes.

It is no longer about spotting obvious fakes. It is about navigating a digital world where manipulated content blends into your daily scroll. Even when viewers know something is AI-generated, they often engage with it anyway. Labels alone do not automatically stop belief or sharing. So Microsoft is proposing something more structured.

How Microsoft’s AI content verification system works

To understand Microsoft’s approach, picture the process of authenticating a famous painting. An owner would carefully document its history and record every change in possession. Experts might add a watermark that machines can detect, but viewers cannot see. They could also generate a mathematical signature based on the brush strokes.

Now Microsoft wants to bring that same discipline to digital content. The company’s research team evaluated 60 different tool combinations, including metadata tracking, invisible watermarks and cryptographic signatures. Researchers also stress-tested those systems against real-world scenarios such as stripped metadata, subtle pixel changes or deliberate tampering.


Rather than deciding what is true, the system focuses on origin and alteration. It is designed to show where the content started and whether someone changed it along the way.
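As a concrete illustration of that origin-and-alteration idea, here is a minimal sketch using a keyed hash; real provenance standards such as C2PA use public-key signatures instead, and the key and content below are purely hypothetical.

```python
import hashlib
import hmac

def fingerprint(content: bytes, key: bytes) -> str:
    # A keyed digest of the exact published bytes acts as a provenance tag.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def unaltered(content: bytes, tag: str, key: bytes) -> bool:
    # If even one byte changed since signing, the recomputed tag won't match.
    return hmac.compare_digest(fingerprint(content, key), tag)

key = b"publisher-signing-key"   # hypothetical signing key
photo = b"...image bytes..."     # hypothetical content
tag = fingerprint(photo, key)

print(unaltered(photo, tag, key))            # True: content is as published
print(unaltered(photo + b"\x00", tag, key))  # False: a single added byte is caught
```

Note what the check does and does not say: it proves the bytes match what the key holder signed; it says nothing about whether the content depicts reality, which is exactly the limit described in the next section.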

What AI content verification can and cannot prove

Before relying on these tools, you need to understand their limits. Verification systems can flag whether someone altered content, but they cannot judge accuracy or interpret context. They also cannot determine meaning. For example, a label may indicate that a video contains AI-generated elements. It will not explain whether the broader narrative is misleading.

Even so, experts believe widespread adoption could reduce deception at scale. Highly skilled actors and some governments may still find ways around safeguards. However, consistent verification standards could reduce a significant share of manipulated posts. Over time, that shift could reshape the online environment in measurable ways.

Why AI labels create a business dilemma for social platforms

Here is where the tension becomes real. Platforms depend on engagement. Engagement often feeds on outrage or shock. And AI-generated content can drive both. If clear AI labels reduce clicks, shares or watch time, companies face a difficult choice. Transparency can clash with business incentives.



Invisible watermarks and cryptographic signatures could signal when images or videos have been altered. (Chona Kasinger/Bloomberg via Getty Images)

Audits of major platforms already show inconsistent labeling of AI-generated posts. Some receive tags. Many slip through without disclosure.

Now, U.S. regulations are stepping in. California’s AI Transparency Act is set to require clearer disclosure of AI-generated material, and other states are considering similar rules. Lawmakers want stronger safeguards.

Still, implementation matters. If companies rush verification tools or apply them inconsistently, public trust could erode even faster.

The risk of incorrect AI labels and false flags

Researchers also warn about sociotechnical attacks. Imagine someone takes a real photo of a tense political event and modifies only a small portion of it. A weak detection system flags the entire image as AI-manipulated.


Now, a genuine image is treated as suspect. Bad actors could exploit imperfect systems to discredit real evidence. That is why Microsoft’s research stresses combining provenance tracking with watermarking and cryptographic signatures. Precision matters. Overreach could undermine the entire effort.

How to protect yourself from AI-generated misinformation

While industry standards evolve, you still need personal safeguards.

1) Slow down before sharing

If a post triggers a strong emotional reaction, pause. Emotional manipulation is often intentional.

2) Check the original source

Look beyond reposts and screenshots. Find the first publication or account.

3) Cross-check major claims

Search for coverage from reputable outlets before accepting dramatic narratives.


4) Verify suspicious images and videos

Use reverse image search tools to see where a photo first appeared. If the earliest version looks different, someone may have altered it.

5) Be skeptical of shocking voice recordings

AI tools can clone voices using short samples. If a recording makes explosive claims, wait for confirmation from trusted outlets.

6) Avoid relying on a single feed

Algorithms show you more of what you already engage with. Broader sources reduce the risk of getting trapped in manipulated narratives.

7) Treat labels as signals, not verdicts

An AI-generated tag offers context. It does not automatically make content harmful or false.

8) Keep devices and software updated

Malicious AI content sometimes links to phishing sites or malware. Updated systems reduce exposure.


9) Strengthen account security

Use strong, unique passwords and a reputable password manager to generate and store complex logins for you. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com. Also, enable multi-factor authentication where available. No system is perfect. But layered awareness makes you a harder target.

Experts say stronger AI labeling standards may reduce deception, but they cannot determine what is true. (iStock)

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.

Kurt’s key takeaways

Microsoft’s AI content verification plan signals that the industry understands the urgency. The internet is shifting from a place where we question sources to a place where we question reality itself. Technical standards could reduce manipulation at scale. But they cannot fix human psychology. People often believe what aligns with their worldview, even when labels suggest caution. Verification may help restore some trust online. Yet trust is not built by code alone.

So here is the question: if every post in your feed came with a digital fingerprint and an AI label, would that actually change what you believe? Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com. All rights reserved.

Did Live Nation punish a venue by taking Billie Eilish away?

John Abbamondi had orders to let the CEO of Ticketmaster down easy.

In April 2021, Abbamondi was the CEO of BSE Global, the company that ran the Brooklyn arena Barclays Center. BSE Global’s existing Ticketmaster contract would expire at the end of September, and Abbamondi and his team had evaluated proposals from SeatGeek, AXS, and Ticketmaster. The economics of Ticketmaster’s offer, according to Abbamondi, “was nowhere near as good as the other two.” The arena decided that SeatGeek’s technology was “superior” to Ticketmaster’s on balance, and that advantage, on top of better financial terms including an equity stake in the company, clinched its decision to go with a newer, smaller player in the field.

When Abbamondi called to break the news to Michael Rapino, the Live Nation Entertainment CEO, the meeting became tense — and a recording of it came back to haunt Rapino in this month’s Live Nation-Ticketmaster monopoly trial. Abbamondi was one of two witnesses who took the stand Wednesday, alongside Mitch Helgerson, the chief revenue officer for the Minnesota Wild hockey team. Both men said that when they considered switching their venues’ ticketing platform from Ticketmaster, executives there threatened them with the loss of vital Live Nation-promoted concerts. It’s the behavior, the Justice Department and 40 state and district attorneys general say, of a monopolist — a charge Live Nation-Ticketmaster denies.

Abbamondi, identifying the voices on the 2021 call to a Manhattan jury Wednesday, said that “the nervous guy was me and the angry guy was Michael.” The few minutes of audio played in court capture an exchange that went “sideways,” as Abbamondi put it, when he tried to thread a delicate needle: rejecting Ticketmaster’s services while trying to hold its parent company, Live Nation, to a separate contract promising to fill Barclays Center with concerts. At one point, Rapino dropped an F-bomb while discussing his frustration over a contractual dispute. He told Abbamondi he believed BSE Global had never planned to renew with Ticketmaster in the first place.

Rapino reminded Abbamondi about the new UBS Arena in Queens, which could draw more Live Nation-promoted shows away from Barclays. Though Ticketmaster theoretically operates separately from Live Nation, Abbamondi took this as a “not-so-veiled” threat — cut off the left arm, and the right arm would swing back. Abbamondi hung up feeling like he’d failed to “do my job there, which was to land the plane smoothly.”



Abbamondi still signed the deal with SeatGeek, which began in October 2021. Then, he testified, the venue “saw a dramatic decline in Live Nation shows that were booked at the arena.” Artists were just beginning to fill stadiums again after the start of the covid pandemic, including Billie Eilish, who’d had to cancel shows in New York venues including Barclays in 2020. Normally, Abbamondi would have expected Live Nation to rebook her show there next time she was on tour. But when she began touring again in 2021, she booked at the new venue Rapino had warned about — the UBS Arena. When Barclays asked about it, they were told it was the “artist’s decision.” Other promoters, he said, hadn’t reduced their bookings at Barclays by nearly as much.

In 2022, mere months into the SeatGeek contract, Abbamondi was fired. Less than a year later, Barclays announced it was going back to Ticketmaster.

Ticketmaster, in the witnesses’ telling, wasn’t the best option for a ticketing vendor, but Live Nation’s power as a concert promoter forced their hand. In the case of the Minnesota Wild, which played at the then-Xcel Energy Center in St. Paul, Helgerson said the fear of losing Live Nation shows was a large driver behind its decision to stick with Ticketmaster — even though it found it would make $1 million a year more switching to SeatGeek.

The arena was already engaged in tight competition for concerts with the Target Center across the river in Minneapolis, a similarly-sized venue. So when the Wild kicked off negotiations over renewing its contract with Ticketmaster in 2018, the ticketing service knew how to hit them where it would hurt. When the Wild staff mentioned they were planning to consider a proposal from SeatGeek too, a Ticketmaster executive told them that Live Nation could move all of their shows to the Target Center if they switched ticketing vendors, Helgerson testified. “We took it as a credible threat,” he said. “Losing those shows would be almost catastrophic to our organization.”



To ease the risk, SeatGeek offered what it called “Live Nation retaliation insurance” — a promise to compensate the arena for concerts booked at the Target Center on dates Xcel had open. SeatGeek offered the arena a higher upfront bonus and fee share that overall would make the venue an additional $1 million a year compared to Ticketmaster’s offer. But even retaliation insurance couldn’t make up for the loss of the “vibrance of the venue” and the impact on its own employees should Live Nation pull its shows. Ticketmaster’s alleged threat created an “insurmountable challenge.” The venue signed another contract with Ticketmaster.

There were complicating factors in both these cases, which Live Nation pointed out on cross-examination. It was both risky and a lot of work to move to a new ticketing platform. Like switching any enterprise software, it would take a while for staff to get up to speed, and Abbamondi admitted that while SeatGeek’s technology gave them more options over things like how to price individual seats, it was less user-friendly. An executive whom Helgerson worked with worried that SeatGeek’s lack of an interface for concert promoters at the time would be an obstacle to getting them to bring shows to the arena. Abbamondi also said he’s personal friends with SeatGeek’s co-founder, and he testified he wasn’t fired because of the SeatGeek deal — he was given two other reasons.


There was also a separate legal dispute between the Barclays Center and Ticketmaster, which appeared to be at least part of the reason the call between Abbamondi and Rapino broke down. Barclays believed its contract with Ticketmaster would expire at the end of September 2021, as originally stated. But Ticketmaster believed that because the Covid pandemic shortened the regular NBA season, a clause in the contract had been triggered that extended it another year. On top of that, in an earlier, unrecorded call between Abbamondi and Rapino, the Ticketmaster CEO suggested that his company should be given the chance to counter any offer Barclays received. Abbamondi said he tried his best to respond in a “noncommittal” way, but Rapino may have seen it differently.


The jury will have to decide whether the threats Abbamondi and Helgerson described were really as menacing as they believe, one of many factors that will determine whether Live Nation-Ticketmaster should face penalties — including the possibility of a breakup.

In one text exchange, Live Nation executive Patti Kim, a friend of Abbamondi’s, wrote that he should “think about the bigger relationship” with Live Nation, not just who’s writing the bigger check. She added a winky face. “That was my friend saying, ‘you know what I mean,’” Abbamondi said. This week, the jury is expected to get the chance to hear from the rival allegedly offering those bigger checks: SeatGeek CEO Jack Groetzinger.

