Technology

AI is now powering cyberattacks, Microsoft warns

Artificial intelligence promised to make life easier. Write emails faster. Build software quicker. Analyze huge datasets in seconds. Unfortunately, cybercriminals noticed those benefits too.

A new report from Microsoft Threat Intelligence reveals that attackers are now using AI across nearly every stage of a cyberattack. The technology helps them move faster, scale operations and lower the technical skill required to launch attacks. In simple terms, AI has become a powerful assistant for hackers.

Instead of replacing cybercriminals, it gives them tools that make their work easier.

Sign up for my FREE CyberGuy Report

  • Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
  • For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
  • Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.

Artificial intelligence is helping hackers write phishing emails, build malware and move faster through cyberattacks, according to Microsoft Threat Intelligence. (shapecharge/Getty Images)

How hackers are using AI today

Cyberattacks usually involve many steps. Attackers scout victims, craft phishing messages, build infrastructure and write malicious code. According to Microsoft researchers, generative AI tools now help speed up many of those tasks.

Attackers are using AI to:

  • Write convincing phishing emails
  • Translate scam messages into different languages
  • Summarize stolen data
  • Generate or debug malware code
  • Build scripts and infrastructure for attacks

AI also helps threat actors move more quickly between stages of an attack. Tasks that once took hours or days may now take minutes. Microsoft describes AI as a “force multiplier” that reduces friction for attackers while humans remain in control of targets and strategy.

Nation-state hackers are already experimenting with AI

Some of the most advanced cyber groups are already experimenting with artificial intelligence. Microsoft says North Korean hacking groups known as Jasper Sleet and Coral Sleet have incorporated AI into their operations.

One tactic involves fake remote workers. Attackers generate realistic identities, resumes and communications using AI. They apply for jobs at Western companies and gain legitimate access to internal systems once hired.

In some cases, AI even helps generate culturally appropriate names or email formats that match specific identities. For example, attackers may prompt AI tools to produce lists of names or create realistic email address formats for a fake employee profile. Once inside a company, that access can become extremely valuable.

As AI lowers the barrier to cybercrime, security experts say strong passwords, software updates and multi-factor authentication matter more than ever. (yasindmrblk/Getty Images)

AI can help build malware and attack infrastructure

Researchers also observed threat actors using AI coding tools to assist with malware development.

Generative AI can help attackers:

  • Write malicious scripts
  • Fix coding errors
  • Convert malware into different programming languages

In some experiments, malware appeared capable of dynamically generating scripts or changing behavior while running. Meanwhile, attackers can use AI to build phishing websites or attack infrastructure more quickly. Microsoft also observed groups using AI to generate fake company websites that support social engineering campaigns.

Hackers are trying to bypass AI safety rules

AI companies have placed guardrails on their systems to prevent misuse. However, attackers are already experimenting with ways to bypass those safeguards. One tactic is called jailbreaking. It involves manipulating prompts so that an AI system generates content it would normally refuse to produce. Researchers are also watching early experiments with agentic AI, which can perform tasks autonomously and adapt to results.

For now, Microsoft says AI mainly assists human operators rather than running attacks on its own. Still, the technology is evolving quickly.

Why AI is lowering the barrier for cybercrime

One of the biggest concerns in the Microsoft report is accessibility. Years ago, launching sophisticated cyberattacks required advanced technical skills. AI tools now help automate parts of that process. Someone with limited programming knowledge can ask AI to generate scripts, troubleshoot code or translate scams into multiple languages.

That shift could expand the number of people capable of launching cyberattacks. At the same time, AI also gives defenders new tools for detecting threats. Security teams are now using AI to analyze behavior, detect anomalies and respond to attacks more quickly. The technology is fueling both sides of the cybersecurity arms race.
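
The defensive side of that arms race can be sketched in miniature. The snippet below is an illustrative toy, not Microsoft's tooling: it learns a baseline from daily login counts and flags days that deviate sharply, which is the core idea behind behavior-based anomaly detection.

```python
import statistics

def flag_anomalies(daily_logins, threshold=2.0):
    """Flag days whose login counts deviate sharply from the baseline.

    A toy z-score detector: real security products use far richer models,
    but the core idea -- learn a baseline, flag outliers -- is the same.
    """
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    return [i for i, count in enumerate(daily_logins)
            if stdev and abs(count - mean) / stdev > threshold]

# A sudden spike on day 6 stands out against a quiet baseline.
print(flag_anomalies([12, 10, 11, 13, 9, 12, 90]))  # [6]
```

Production systems apply the same baseline-and-outlier logic to billions of signals rather than a handful of counts.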

Microsoft says cybercriminals are using AI as a force multiplier, making scams, malware and fake identities easier to create and deploy. (shapecharge/Getty Images)

How Microsoft is responding to AI-powered cyber threats

Microsoft says its security teams are working to detect and disrupt AI-enabled cybercrime as it emerges. The company uses threat intelligence systems to monitor attacker activity, identify new tactics and share findings with organizations around the world.

Microsoft also integrates AI into its own security tools to help detect suspicious behavior, phishing campaigns and unusual account activity faster. These systems analyze patterns across billions of signals each day to identify threats before they spread widely.

The company says organizations should strengthen identity protections, monitor unusual credential use and treat suspicious remote worker activity as a potential insider risk.

How to protect yourself from AI-powered cyberattacks

The rise of AI-powered cyberattacks can sound alarming. The good news is that many proven security habits still work. A few simple steps can dramatically reduce your risk.

1) Be cautious with unexpected messages

AI-generated phishing emails are becoming more convincing. Always verify requests for passwords, payments or sensitive information before clicking links or downloading files. Also, use strong antivirus protection on all your devices. Strong antivirus software can detect malware, block suspicious downloads and warn you about dangerous websites before they load. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

2) Use strong, unique passwords

A password manager can generate and store complex passwords for every account. This prevents attackers from accessing multiple accounts if one password is exposed. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
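
Under the hood, a password manager's generator does something like this sketch, which uses Python's standard secrets module (designed for cryptographic randomness) rather than ordinary random numbers:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from the OS's secure random source.

    This mirrors what a password manager's generator does; a real manager
    also stores, syncs and autofills the result so you never retype it.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

Because each password is random and independent, one leaked credential tells an attacker nothing about your other accounts.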

3) Turn on multi-factor authentication

Even if someone steals your password, multi-factor authentication adds a second layer of protection and can stop many account takeovers.
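
For the curious, the codes in authenticator apps come from a published standard, RFC 6238 (time-based one-time passwords). The sketch below shows the algorithm: a shared secret plus the current time yields a short code that changes every 30 seconds, so a stolen password alone is not enough to log in.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, SHA-1).

    This is the same algorithm authenticator apps implement; the secret
    is the value encoded in the QR code you scan during setup.
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 -> 94287082
print(totp(base64.b32encode(b"12345678901234567890").decode(), t=59, digits=8))
```

Because the code depends on both the secret and the clock, a phished password expires as a way in within seconds.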

4) Keep devices and software updated

Security updates patch vulnerabilities that attackers often exploit. Turn on automatic updates whenever possible.

5) Remove personal data from public websites

Cybercriminals often gather personal information from data broker sites before launching scams. Using a data removal service can help reduce the amount of personal information attackers can find about you online.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

6) Watch for unusual account activity

Unexpected login alerts, password reset messages, or unfamiliar devices connected to your accounts may signal a breach. Act quickly if something looks suspicious. 

Kurt’s key takeaways

Artificial intelligence is transforming almost every industry. Cybercrime is no exception. Hackers now use AI to craft phishing messages, build malware and scale attacks faster than ever before. The technology lowers technical barriers and speeds up operations while human attackers remain in control. Security experts expect the use of AI in cyberattacks to grow as tools become more powerful and widely available. That makes awareness and strong digital habits more important than ever. Because the next phishing email you receive may not have been written by a person at all.

If AI can now help hackers launch attacks faster and at a larger scale, are tech companies moving quickly enough to protect you? Let us know by writing to us at Cyberguy.com.

Copyright 2026 CyberGuy.com. All rights reserved.

Technology

A giant cell tower is going to space this weekend

This weekend’s scheduled Blue Origin rocket launch is rather momentous. Success would signal an end to SpaceX’s monopoly on reusable orbital launch vehicles, and set up a three-way race to make that “No Service” indicator on your phone disappear forever.

On Sunday morning, Jeff Bezos’ massive New Glenn rocket is scheduled to launch with the first-stage booster that launched and landed on the program’s second mission last November. It’s a critical test, because cost-effective booster reuse is what has made SpaceX’s Falcon 9 so dominant.

Amazon desperately needs a reusable rocket of its own to accelerate its Leo launches. Without one, it has managed to launch only 241 Leo satellites over the past 12 months, putting it well behind schedule. In that same period, SpaceX’s Falcon 9 deployed more than 1,500 satellites to its Starlink constellation.

Sunday’s mission will carry AST SpaceMobile’s BlueBird 7 satellite to low Earth orbit. Instead of blanketing the region with thousands of small satellites like Amazon and SpaceX, AST’s plan is to deploy fewer satellites that are much more powerful. BlueBird 7 features a massive 2,400-square-foot phased-array antenna, making it the largest commercial communications array ever deployed in low Earth orbit. It’s essentially a cell tower in space, and will be the second of the company’s “Block 2” next-generation satellites to launch.

The BlueBird 7 is designed to provide 4G and 5G broadband, at speeds exceeding 120 Mbps, to the phones we already carry. AST plans to have 45 to 60 satellites launched by the end of 2026. When AST lights up its service sometime this year, it will compete directly with Starlink’s direct-to-cell service, already operating with T-Mobile in the US, and with Globalstar, the Apple-backed satellite network that keeps iPhones and Apple Watches communicating in dead zones.


Technology

New FBI warning reveals phishing attacks hitting private chats

You probably think your messages are safe. After all, apps like WhatsApp, Signal and Telegram promote strong encryption.

But a new warning from the Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation shows that attackers do not need to break encryption at all.

Instead, they are going after you.

A new federal advisory says phishing campaigns tied to Russian intelligence are going after messaging app users instead of trying to break encryption. (MStudioImages/Getty Images)

What the FBI and CISA just revealed

According to the joint advisory, cyber actors tied to Russian intelligence are running large-scale phishing campaigns targeting messaging apps.

These attacks are not random. They have focused on high-value targets like government officials, military personnel and journalists. However, the tactics can easily spread to everyday users.

Here is the key takeaway: Hackers are not cracking the apps themselves. They are tricking people into giving up access. 

How these messaging app attacks actually work

This is where it gets interesting and a bit unsettling. Instead of breaking encryption, attackers use phishing to gain control of individual accounts. Once inside, they can:

  • Read private conversations
  • Access contact lists
  • Send messages as if they were you
  • Launch new scams targeting your contacts

It becomes a chain reaction. One compromised account can quickly lead to many more. In some cases, attackers impersonate trusted contacts. That makes the scam feel real and urgent.

Why encryption is not enough anymore

Encryption still matters. It protects messages as they travel between devices. But here is the problem. If someone logs into your account, they see everything just like you do.

That means even the most secure app cannot protect you if your login gets compromised. This is a shift in how cyberattacks work. The weakest link is no longer the technology. It is human behavior.

The FBI and CISA are warning that attackers are targeting users of encrypted messaging apps by tricking them into handing over account access. (BackyardProduction/Getty Images)

Who is at risk from messaging app phishing attacks

While the advisory highlights high-profile targets, the tactics are not limited to them.

If you use messaging apps for:

  • Personal conversations
  • Work communication
  • Sharing sensitive information

You are a potential target. Phishing works because it relies on simple mistakes. A quick tap on the wrong link is often all it takes. 

What this means for you

This warning highlights a bigger trend. Cyberattacks are becoming more personal. Instead of attacking systems, hackers are targeting people directly. That makes awareness your strongest defense. The more you understand how these scams work, the harder it becomes for attackers to succeed.

Ways to stay safe from messaging app phishing attacks

You do not need to be a cybersecurity expert to protect yourself. You just need to slow things down and follow a few smart habits.

1) Be skeptical of unexpected messages

If a message feels urgent or out of place, pause, even if it looks like it came from someone you know.

2) Never click suspicious links

Avoid links sent through messages unless you can verify them independently. Strong antivirus software can help detect suspicious behavior after a compromise. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.

3) Turn on two-factor authentication

Two-factor authentication (2FA) adds a second layer of protection even if your password gets exposed.

Officials say hackers can read messages, access contacts and impersonate users once they gain control of a messaging app account. (FreshSplash/Getty Images)

4) Watch for login alerts

Many apps notify you when a new device signs in. Do not ignore these warnings.

5) Verify requests in another way

If a contact asks for something unusual, call them or confirm through another channel.

6) Use a data removal service

Limit how much of your personal information is available online. Data removal services work to delete your data from broker sites, making it harder for scammers to target you with convincing phishing messages. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

7) Keep your device and apps updated

Install updates regularly. Security patches fix vulnerabilities that attackers can exploit after gaining access.

Kurt’s key takeaways

Messaging apps feel private. They feel secure. That sense of comfort is exactly what attackers are counting on. The technology is still strong. The real question is whether your habits are keeping up. So the next time a message pops up that feels slightly off, trust that instinct and take a second look.

Have you ever received a suspicious message that made you stop and question if it was real? Let us know by writing to us at Cyberguy.com.


Technology

YouTube’s mobile app finally lets you share timestamped videos

YouTube is changing how you share videos from its mobile app. You can finally share a video from a specific timestamp, making it easier to point someone to the exact part you want them to see while you’re on your phone. However, the change replaces the Clips feature, which let you make a shareable clip from a video.
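
For context, timestamped YouTube links encode the start position as a t query parameter in seconds, which is what the new share option produces for you. The helper below is an illustrative sketch (the video ID is made up):

```python
from urllib.parse import urlencode

def timestamped_link(video_id: str, seconds: int) -> str:
    # YouTube's short share links accept a "t" query parameter giving
    # the playback start position in seconds.
    return f"https://youtu.be/{video_id}?{urlencode({'t': seconds})}"

print(timestamped_link("dQw4w9WgXcQ", 90))  # https://youtu.be/dQw4w9WgXcQ?t=90
```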

You’ll still be able to watch any Clips you’ve already made. But moving forward, “the ability to set an end time or include a custom description when sharing will no longer be available,” YouTube says. The company adds that while clipping is an “important way for creators to reach new audiences,” “a number of third-party tools with advanced clipping features and authorized creator programs are now available to do this across different video platforms.”

The company originally introduced the Clips feature in 2021.
