
Tech giants unite to fight online scams


If you’ve ever gotten a suspicious text, a fake delivery alert or a message that felt just a little too convincing, you’ve already seen how fast scams are evolving. Now, some of the biggest names in tech and retail are scrambling to catch up.

Eleven major companies across those industries, including Google, Amazon, OpenAI, Adobe, Pinterest, LinkedIn, Match Group, Meta, Microsoft, Target and Levi Strauss & Co., have signed a new agreement to share information about scams and fraud.

At first glance, it sounds like a strong step forward. But this is more than a coordinated effort. It is a response to how modern scams actually work today.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter  



A new industry agreement aims to block scam accounts, fake domains and fraud patterns before they spread across platforms. (Tristan Spinski for The Washington Post via Getty Images)

Why online scams are getting harder to stop

Scammers no longer operate in one place. They might find you on social media, move the conversation to a messaging app, then push you to send money through a fake website or payment service. It is all connected. That’s exactly what this new agreement, called the Industry Accord Against Online Scams & Fraud, is trying to address.

Instead of companies working in isolation, they are promising to share threat data in near real time. That includes things like scam accounts, fake domains and patterns tied to organized fraud. The idea is that if one company spots a scam early, others can block it before it spreads. 

What the companies are actually promising

This is not just about talking. The companies outlined a few concrete steps they plan to take:


Share intelligence faster

They will exchange information about scam networks, tactics and accounts across platforms and with law enforcement.

Use AI to detect scams earlier

Many companies already rely on AI to flag suspicious behavior. Now they want to expand those systems to catch scams faster and more accurately.

Add stronger verification

Expect tighter checks for financial transactions to confirm both sides are legitimate.

Improve reporting tools

Users should see clearer ways to report scams and get help.

Push governments to act

Companies are also calling for scam prevention to become a national priority in more countries.


That all sounds promising. But there is a catch.

The biggest limitation you should know

This agreement is voluntary. There are no penalties if companies fail to follow through. That means success depends entirely on how seriously each company takes it.

Still, even a loose collaboration could make a difference. Scammers thrive in gaps between platforms. Closing those gaps, even partially, could slow them down.


Big Tech and retail leaders are promising faster scam detection, stronger verification and better reporting tools for consumers. (Halfpoint/Getty Images)


How AI is making online scams more dangerous

This push comes as scams are becoming more sophisticated and harder to detect. AI is a big reason why: it lets scammers craft convincing, personalized attacks and spin them up at a scale that was not possible before.

At the same time, companies are using AI to fight back. Google alone blocks hundreds of millions of scam-related results daily, while Meta has removed massive numbers of scam ads using automated systems. It’s essentially an arms race.

What this means for your online safety

In theory, this agreement could lead to fewer scams slipping through the cracks.

You might start to notice:

  • Faster removal of scam accounts
  • More warnings when something looks suspicious
  • Fewer fake ads or impersonation attempts

But this won’t eliminate scams entirely. Criminal networks are global, coordinated and constantly adapting. So while companies are stepping up, your own awareness still matters.

Cybersecurity expert warns scams are evolving fast

To understand what this really means in practice, it helps to hear from people who track these threats every day. Trend Micro, a global cybersecurity company, says this kind of collaboration is long overdue.


Trend Micro’s VP of Consumer Marketing and Education, Lynette Owens, believes cross-industry coordination is a critical step forward as scams increasingly unfold across multiple platforms. She tells CyberGuy:

“It’s encouraging to see major platforms like Google, Meta and Amazon coming together to share intelligence and disrupt scam networks. Cross-industry collaboration has proven to be helpful in fighting other types of online harms and has been a fruitful counter-measure against scams and fraud in other countries. Anything that moves us more towards prevention is a win, as so much effort is currently directed at what happens after the harm is done. 

“But while it’s a useful step forward, it’s not a complete solution. Scammers are constantly evolving, using AI and multi-channel tactics to create more convincing, personalized attacks that are harder for people to recognize in the moment. 

“What consumers really need is intervention that alerts them where scams actually happen, with clear, timely signals that something isn’t right. In today’s environment, scams don’t come as a single message. They unfold over time and adapt faster than ever to changing consumer habits or platform best practices. Collaboration is an important piece of the puzzle, but the more tools consumers have at their fingertips to fight back, the better their chances at stopping a scam before any real damage is done.”

Her takeaway is clear. Collaboration helps, but it will not be enough on its own.



Google, Amazon, Meta and other major brands are teaming up as AI-powered scams grow more convincing and harder to stop. (John Keeble/Getty Images)

How to protect yourself from online scams

Even as companies step up their defenses, there are still simple steps you can take right now to reduce your risk and stay one step ahead of scammers.

1) Avoid unknown links

Do not click links in unexpected texts, emails or messages. Instead, go directly to the official website by typing the address yourself.

2) Use strong security software

Install strong antivirus software to help detect malicious links, phishing attempts and suspicious apps before they cause harm. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.


3) Turn on two-factor authentication

Enable two-factor authentication (2FA) on your accounts whenever possible. This adds an extra layer of protection even if your password is exposed.

4) Limit where your personal data appears

The more your personal information is available online, the easier it is for scammers to target you. Consider using a data removal service to reduce your exposure on data broker and people-search sites. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

5) Monitor your accounts regularly

Check your bank, credit card and online accounts often so you can catch suspicious activity early and act quickly.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.


Kurt’s key takeaways

This new alliance signals a shift. Tech companies are starting to treat scams as a shared problem rather than isolated incidents. That’s a big step in the right direction. But whether it actually slows down scammers will depend on execution, not promises. Coordination helps, but enforcement and accountability matter just as much.


If scams keep getting smarter, should tech companies be required to do more than just cooperate voluntarily?  Let us know by writing to us at Cyberguy.com


Copyright 2026 CyberGuy.com.  All rights reserved.


Google makes it easy to deepfake yourself


YouTube Shorts is rolling out a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, hinted at earlier this year, reflects the platform’s fraught relationship with AI-generated content, adding more generative features while struggling to contain AI slop, deepfake scams, and impersonations.

YouTube says the new tool will let users create a digital version of themselves, called an avatar, that can be inserted into existing Shorts videos or used to generate entirely new ones. The company said avatars will “look and sound like you,” framing them as a safer and more secure way to use AI to create new content.

Creating an avatar is a bit more involved than simply pressing a button, but it sounds fairly straightforward. In a blog post outlining the process, YouTube said users must first record a “live selfie” capturing their face and voice while following a series of prompts. For the best results, the company recommends good lighting, a quiet area, a background free of other people or images of faces, and holding the phone at eye level.

Once avatars are made, users can select “make a video with my avatar” while creating a video to generate a clip of up to eight seconds from prompts, according to 9to5Google. Users can also add their avatar to “eligible Shorts” in their feed, though YouTube did not specify what makes a Short eligible.

The AI avatar feature comes with fairly tight restrictions. Avatars can only be used in the creator’s own original videos, and creators control whether their Shorts can be remixed. The creator can delete their avatar, or any videos where it appears, at any time, YouTube says. Avatars that aren’t used to create new content for three years will be automatically deleted.


Not everyone will be able to use the feature immediately. YouTube says the tool “will be rolling out gradually,” though it did not give a timeline or indication of where it will be available first. Creators must also be at least 18 and own an existing YouTube channel, the company says.

Its arrival comes as one of Google’s main AI rivals, OpenAI, pulls back from video generation. The startup said it was sunsetting its Sora video tool last month after a year of struggling to get the wannabe social platform off the ground. It was costly and faced a parade of copyright challenges, deepfake controversies, and slop that made it an unattractive bet for investors ahead of an anticipated IPO this year.


Apple Pay text scam almost cost her $15,000


You see a charge you don’t recognize. It looks like it came from a trusted brand. Your instinct kicks in. You want to fix it quickly and move on. That’s exactly what happened to Dorothy.

After a simple text, she found herself on the phone with someone who sounded official, confident and completely convincing. Here’s how she described it:

“I received a text from APPLE Pay, which I don’t even use… It said an Apple Store in CA wants to charge me $144… If I have questions, I should call. DUH! I called and was speaking with the scammer.”


— Dorothy


Within minutes, the situation escalated.

“He knew everything about me… He said I should take out $15,000… He said he was working with the FBI and the FDIC.”

That’s when the pressure really started. Dorothy told me this story when she joined me on my Beyond Connected podcast, and what happened next shows just how far these scams can go.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily. Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.



The text sent to Dorothy shows how a fake Apple Pay alert uses urgency and a phone number to pull you into a scam. (Kurt “CyberGuy” Knutsson)

How this Apple Pay text scam actually works

This scam follows a pattern that is becoming more common. It combines a fake alert with a live phone call designed to build trust fast.

Here’s what is happening behind the scenes:

Step 1: The fake charge alert

You get a text about a suspicious charge. It looks urgent. It often includes a number to call.

Step 2: You call the scammer

The number connects you directly to a criminal. They pose as Apple, your bank or even law enforcement.


Step 3: They build credibility

They may know your name, address or bank. That information often comes from past data breaches.

Step 4: They create fear and urgency

You are told your money is at risk. You need to act immediately.

Step 5: They control your next move

In Dorothy’s case, the scammer told her to withdraw $15,000 and lie to her bank about why.

“He said he would stay on the phone with me while I drove to the bank… If anyone asked, I should say I was buying a car.”

That is a major red flag.



Once you call, scammers pose as trusted companies or agencies and pressure you to act quickly. (Kurt “CyberGuy” Knutsson)

The moment everything could have gone wrong

Dorothy drove to the bank with the scammer still on the phone. This is exactly what criminals want. They try to isolate you and keep control of the situation.

But something didn’t feel right.

“When I got to the bank, I recognized one of the employees and told her that I was uncomfortable… She said to hang up immediately.”


That decision changed everything.

The bank confirmed it was a scam. The calls kept coming from different numbers. Dorothy blocked them all. Fortunately, no money was lost.

Why the Apple Pay text scam feels so real

Scammers are getting better at one thing. They make you feel like you are solving a problem, not being scammed.

Here’s why this one works so well:

  • It uses a trusted name like Apple Pay
  • It creates urgency with a fake charge
  • It moves quickly to a live conversation
  • It uses real personal details to build trust
  • It pressures you to act before you think

They also add authority. Claiming ties to the FBI or FDIC makes people feel like they must comply. In reality, no legitimate agency will ever ask you to move money this way.

The biggest red flags to watch for

If you remember nothing else, remember these:

  • A text about a charge that tells you to call a number
  • Someone asking you to withdraw large amounts of cash
  • Instructions to lie to your bank or keep a secret
  • Claims that your money needs to be “protected”
  • Pressure to act immediately

Each one is a warning sign. Together, they confirm it is a scam.

The biggest red flag is being told to move money or keep secrets from your bank or family. (Kurt “CyberGuy” Knutsson)

How to stay safe from Apple Pay text scams

You do not need to outsmart scammers. You just need to slow the situation down.

1) Never trust the number in the message

If you get a suspicious text, do not call the number provided. Look up the official number yourself.

2) Pause before you act

Scammers rely on urgency. Take a moment. Real companies will not rush you like this.

3) Never move money on someone else’s instructions

No bank, tech company or government agency will ask you to withdraw cash to “protect” it.


4) Use strong antivirus software

Strong antivirus software can help detect malicious links, block scam websites and warn you before you engage with risky content. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

5) Remove your personal data from the web

Scammers often use data from breaches to sound convincing. A data removal service can help reduce your exposure and limit what criminals can find about you online. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

6) Talk to someone you trust

A quick conversation with a friend, family member or bank employee can stop a scam cold.

7) Add extra protection

Consider identity monitoring services that alert you if your information is being misused. See my tips and best picks on Best Identity Theft Protection at Cyberguy.com.

What to do if this happens to you

Even if you did not lose money, take a few steps right away:

  • Contact your bank using the number on your card
  • Place a fraud alert on your credit
  • Consider freezing your credit
  • Monitor your accounts closely
  • Block any follow-up calls or texts

These steps help protect you from future attempts.

What this means for you

This scam did not begin with a complex hack. Instead, it started with a simple text. That is what makes it so dangerous. At first, it looks routine. Then urgency takes over. As a result, anyone can feel pressured to act quickly and without thinking.

In many cases, the situation feels real. That is how people get pulled into a conversation that seems legitimate. In Dorothy’s case, she trusted her instincts at the right moment. Because of that decision, fortunately, she did not lose $15,000.


Kurt’s key takeaways

Scammers target more than technology. They focus on human behavior. They create pressure, build trust and keep you engaged long enough to make a mistake. However, you can break the cycle. A single pause can disrupt the scam. Asking one question can expose it. Even a quick conversation with someone you trust can stop it. If you’d like to hear more of Dorothy’s story, you can catch our full conversation on my Beyond Connected podcast at getbeyondconnected.com/

If you got a text like this right now, would you pause or would you call? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.


OpenAI made economic proposals — here’s what DC thinks of them


Happy ceasefire day and welcome to Regulator, a newsletter for Verge subscribers about Big Tech’s rocky journey through the world of politics. If you’re not a subscriber yet, you can do so here, but my only request is that you sign up before Donald Trump decides to revisit his previous threats toward Iran and kickstart World War III.

I’m back after being waylaid last week by the deadly combo of a moderate cold and the beginning of pollen season. (Twenty-one percent of the District’s acreage is taken up by public green space, and DC’s park system is consistently ranked the best in America. Unfortunately, I am allergic to every tree and grass.) If you’ve got tips on anything I may have missed or anything I should know about the upcoming weeks, send ’em to tina.nguyen+tips@theverge.com.

Do you actually believe anything OpenAI says?

On Monday, OpenAI published a 13-page policy paper addressing the impact that artificial intelligence would have on the American workforce. The company also proposed what it believed was the solution: putting higher capital gains taxes on corporations replacing their workers with AI and using that money to create a bigger public safety net. Its solutions included a public wealth fund, a four-day workweek funded by “efficiency dividends,” and government programs to help transition workers into “human-centered” work, all financed by the abundance that artificial intelligence would deliver.

Unfortunately, it was released the day that The New Yorker’s Ronan Farrow and Andrew Marantz published a meticulously reported, 17,000-word-plus article chronicling Sam Altman’s history of lying to everyone around him, including to his Silicon Valley backers, his employees, his board, and — relevant in this case — lawmakers trying to regulate AI. The New Yorker article reinforced a long-standing narrative about Altman, and OpenAI by extension: They may spout idealistic values, but would quickly jettison them for financial and political gains.


On its own, several people I spoke to said, the paper was a net positive for AI governance overall, in that it introduced new ideas into the political discourse around the emerging technology. But unless the company’s policy and political-influence work makes good on those promises, OpenAI’s critics said, it may as well just be a piece of paper.

“My guess is that there are people on the team who care about the stuff, who’ve thought really hard about this document and are proud of it, and did good work, even if it’s not addressing all of the questions that I wish it would address,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “And there’s still the question of: Are those people gonna find themselves in the position that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then ended up finding out that wasn’t the case, becoming disenchanted and leaving?”

With OpenAI proposing policy, it’s worth looking back at its history with the government, which the New Yorker piece details in depth. Altman had been one of the first major CEOs to publicly advocate for federal oversight for AI, going so far as to propose a federal agency to oversee advanced models in 2023 — but privately he worked to suppress the laws containing his own safety proposals. A state legislative aide in California accused OpenAI of engaging in “increasingly cunning, deceptive behavior” to kill a 2023 AI safety bill that it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort to, as one such supporter put it to The New Yorker, “basically scare them into shutting up.” And though Altman had once worked extensively with the Biden administration to build AI safety standards, the moment that Donald Trump became president, Altman successfully persuaded him to kill the initiatives he’d once advocated for.

Nathan Calvin, the general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, had received one of those subpoenas. “What I’ve seen from their policy and government affairs engagement has just been abysmal,” he told me. While he believed that the team who’d written the OpenAI proposal, primarily from the technical safety research side, was acting with good intentions, he was still reserving judgment. “Will those folks remain engaged as we move from general policy principles towards the many other ways in which lobbying and government influence actually happens? Part of me is hopeful, but a lot of me is also quite skeptical about whether that will happen.” (OpenAI did not return a request for comment.)

A modest, absolutely not craven request:


Next week I plan on running an issue of Regulator cataloging the nerdiest events happening during Nerd Prom, aka the White House Correspondents’ Dinner party circuit. If you’re a tech founder, a tech company, or someone who does something related to technology and you’re throwing an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to shake up the normal social dynamics of the week — I’ve already caught wind of the Grindr party in Georgetown, and the Substack party, which famed looksmaxxer Clavicular is attending — and I’m so, so excited to pull together the most bonkers “SPOTTED” column that Washington’s ever experienced.

(Again, this is contingent upon whether we’re at war with Iran by the end of April, in which case, I imagine no one will be up for frivolity.)

Speaking of DC reporters, this is very true of all of us:

Screenshot via @jakewilkns/X.