Technology
Inside Microsoft’s AI content verification plan
Scroll your social media feed for five minutes. You will likely see something that looks real but feels slightly off.
Maybe it is a viral protest image that turns out to be altered. Maybe it is a slick video pushing a political narrative. Or maybe it is an artificial intelligence voice clip that spreads before anyone stops to question it.
AI-enabled deception now permeates everyday life. And Microsoft says it has a technical blueprint to help verify where online content comes from and whether it has been altered.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Microsoft’s proposal would attach digital fingerprints and metadata to help trace where online content originated. (YorVen/Getty Images)
Why AI-generated content feels more convincing today
AI tools can now generate hyperrealistic images, clone voices and create interactive deepfakes that respond in real time. What once required a studio or intelligence agency now requires a browser window. That shift changes the stakes.
It is no longer about spotting obvious fakes. It is about navigating a digital world where manipulated content blends into your daily scroll. Even when viewers know something is AI-generated, they often engage with it anyway. Labels alone do not automatically stop belief or sharing. So Microsoft is proposing something more structured.
How Microsoft’s AI content verification system works
To understand Microsoft’s approach, picture the process of authenticating a famous painting. An owner would carefully document its history and record every change in possession. Experts might add a watermark that machines can detect, but viewers cannot see. They could also generate a mathematical signature based on the brush strokes.
Now Microsoft wants to bring that same discipline to digital content. The company’s research team evaluated 60 different tool combinations, including metadata tracking, invisible watermarks and cryptographic signatures. Researchers also stress-tested those systems against real-world scenarios such as stripped metadata, subtle pixel changes or deliberate tampering.
Rather than deciding what is true, the system focuses on origin and alteration. It is designed to show where the content started and whether someone changed it along the way.
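The origin-and-alteration idea can be sketched in a few lines of Python. To be clear, this is not Microsoft's system: production efforts such as the C2PA standard rely on public-key certificates, while this self-contained sketch substitutes a single shared HMAC key (the `SIGNING_KEY` below is hypothetical). The principle is the same either way: bind a signed provenance record to the exact bytes of the content, so that any later change, to the content or to the record, breaks verification.

```python
import hashlib
import hmac
import json

# Hypothetical publisher key. Real provenance systems use asymmetric
# (public-key) signatures so anyone can verify without holding a secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(content: bytes, metadata: dict) -> dict:
    """Bind a provenance record to the exact bytes of a piece of content."""
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,  # e.g. who created it, with what tool, when
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(content: bytes, record: dict) -> bool:
    """True only if the content is byte-identical and the record untampered."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # the provenance record itself was altered
    return claimed["content_hash"] == hashlib.sha256(content).hexdigest()

photo = b"...image bytes..."
record = sign_content(photo, {"creator": "Example News", "tool": "camera"})
print(verify_content(photo, record))            # True: untouched
print(verify_content(photo + b"edit", record))  # False: one changed byte breaks it
```

Note what the check does and does not say: it proves the bytes match what the signer published, not that the photo depicts reality.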
What AI content verification can and cannot prove
Before relying on these tools, you need to understand their limits. Verification systems can flag whether someone altered content, but they cannot judge accuracy, interpret context or determine meaning. For example, a label may indicate that a video contains AI-generated elements. It will not explain whether the broader narrative is misleading.
Even so, experts believe widespread adoption could reduce deception at scale. Highly skilled actors and some governments may still find ways around safeguards. However, consistent verification standards could reduce a significant share of manipulated posts. Over time, that shift could reshape the online environment in measurable ways.
Why AI labels create a business dilemma for social platforms
Here is where the tension becomes real. Platforms depend on engagement. Engagement often feeds on outrage or shock. And AI-generated content can drive both. If clear AI labels reduce clicks, shares or watch time, companies face a difficult choice. Transparency can clash with business incentives.
Invisible watermarks and cryptographic signatures could signal when images or videos have been altered. (Chona Kasinger/Bloomberg via Getty Images)
Audits of major platforms already show inconsistent labeling of AI-generated posts. Some receive tags. Many slip through without disclosure.
Now, U.S. regulations are stepping in. California’s AI Transparency Act is set to require clearer disclosure of AI-generated material, and other states are considering similar rules. Lawmakers want stronger safeguards.
Still, implementation matters. If companies rush verification tools or apply them inconsistently, public trust could erode even faster.
The risk of incorrect AI labels and false flags
Researchers also warn about sociotechnical attacks. Imagine someone takes a real photo of a tense political event and modifies only a small portion of it. A weak detection system flags the entire image as AI-manipulated.
Now, a genuine image is treated as suspect. Bad actors could exploit imperfect systems to discredit real evidence. That is why Microsoft’s research stresses combining provenance tracking with watermarking and cryptographic signatures. Precision matters. Overreach could undermine the entire effort.
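A toy illustration, not Microsoft's design, shows why precision helps here: fingerprinting content in fixed-size blocks lets a verifier report which portion changed, instead of condemning an entire genuine image over one small edit. The "pixels" below are plain bytes to keep the sketch self-contained.

```python
import hashlib

def block_fingerprints(pixels: bytes, block: int = 4) -> list[str]:
    """Hash fixed-size chunks so tampering can be localized, not just detected."""
    return [hashlib.sha256(pixels[i:i + block]).hexdigest()
            for i in range(0, len(pixels), block)]

def altered_blocks(original: bytes, suspect: bytes) -> list[int]:
    """Return the indices of blocks whose fingerprints no longer match."""
    a, b = block_fingerprints(original), block_fingerprints(suspect)
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

original = bytes(range(16))
tampered = bytearray(original)
tampered[6] ^= 0xFF  # change a single "pixel"
print(altered_blocks(original, bytes(tampered)))  # [1]: only the edited block flagged
```

A system that can say "this corner was modified, the rest matches the original" is far harder to abuse than one that stamps the whole image as fake.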
How to protect yourself from AI-generated misinformation
While industry standards evolve, you still need personal safeguards.
1) Slow down before sharing
If a post triggers a strong emotional reaction, pause. Emotional manipulation is often intentional.
2) Check the original source
Look beyond reposts and screenshots. Find the first publication or account.
3) Cross-check major claims
Search for coverage from reputable outlets before accepting dramatic narratives.
4) Verify suspicious images and videos
Use reverse image search tools to see where a photo first appeared. If the earliest version looks different, someone may have altered it.
5) Be skeptical of shocking voice recordings
AI tools can clone voices using short samples. If a recording makes explosive claims, wait for confirmation from trusted outlets.
6) Avoid relying on a single feed
Algorithms show you more of what you already engage with. Broader sources reduce the risk of getting trapped in manipulated narratives.
7) Treat labels as signals, not verdicts
An AI-generated tag offers context. It does not automatically make content harmful or false.
8) Keep devices and software updated
Malicious AI content sometimes links to phishing sites or malware. Updated systems reduce exposure.
9) Strengthen account security
Use strong, unique passwords and a reputable password manager to generate and store complex logins for you. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com. Also, enable multi-factor authentication where available. No system is perfect. But layered awareness makes you a harder target.
Experts say stronger AI labeling standards may reduce deception, but they cannot determine what is true. (iStock)
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt’s key takeaways
Microsoft’s AI content verification plan signals that the industry understands the urgency. The internet is shifting from a place where we question sources to a place where we question reality itself. Technical standards could reduce manipulation at scale. But they cannot fix human psychology. People often believe what aligns with their worldview, even when labels suggest caution. Verification may help restore some trust online. Yet trust is not built by code alone.
So here is the question. If every post in your feed came with a digital fingerprint and an AI label, would that actually change what you believe? Let us know by writing to us at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
A giant cell tower is going to space this weekend
This weekend’s scheduled Blue Origin rocket launch is rather momentous. Success would signal an end to SpaceX’s monopoly on reusable orbital launch vehicles, and set up a three-way race to make that “No Service” indicator on your phone disappear forever.
On Sunday morning, Jeff Bezos’ massive New Glenn rocket is scheduled to launch with the first-stage booster that launched and landed on the program’s second mission last November. It’s a critical test, because cost-effective booster reuse is what’s made SpaceX’s Falcon 9 so dominant.
Amazon desperately needs a reusable rocket of its own to accelerate its Leo launches. Without one, it has managed to launch only 241 Leo satellites over the past 12 months, putting it well behind schedule. In that same period, SpaceX’s Falcon 9 rocket deployed over 1,500 satellites to its Starlink constellation.
Sunday’s mission will carry AST SpaceMobile’s BlueBird 7 satellite to low Earth orbit. Instead of blanketing the region with thousands of small satellites like Amazon and SpaceX, AST’s plan is to deploy fewer satellites that are much more powerful. BlueBird 7 features a massive 2,400-square-foot phased-array antenna, making it the largest commercial communications array ever deployed in low Earth orbit. It’s essentially a cell tower in space, and will be the second of the company’s “Block 2” next-generation satellites to launch.
The BlueBird 7 is designed to provide 4G and 5G broadband, at speeds exceeding 120 Mbps, to the phones we already carry. AST plans to have 45 to 60 satellites launched by the end of 2026. When AST lights up its service sometime this year, it will be in direct competition with Starlink’s direct-to-cell service, already operating with T-Mobile in the US, and Globalstar, the satellite network that keeps iPhones and Apple Watches communicating in dead zones.
Technology
New FBI warning reveals phishing attacks hitting private chats
Cyber expert shares tips to avoid AI phishing scams
Kurt ‘The CyberGuy’ Knutsson shares practical ways to avoid falling victim to AI-generated phishing scams and discusses a report that North Korean agents are posing as I.T. workers to funnel money into the country’s nuclear program.
You probably think your messages are safe. After all, apps like WhatsApp, Signal and Telegram promote strong encryption.
But a new warning from the Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation shows that attackers do not need to break encryption at all.
Instead, they are going after you.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com, trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join.
A new federal advisory says phishing campaigns tied to Russian intelligence are going after messaging app users instead of trying to break encryption. (MStudioImages/Getty Images)
What the FBI and CISA just revealed
According to the joint advisory, cyber actors tied to Russian intelligence are running large-scale phishing campaigns targeting messaging apps.
These attacks are not random. They have focused on high-value targets like government officials, military personnel and journalists. However, the tactics can easily spread to everyday users.
Here is the key takeaway: Hackers are not cracking the apps themselves. They are tricking people into giving up access.
How these messaging app attacks actually work
This is where it gets interesting and a bit unsettling. Instead of breaking encryption, attackers use phishing to gain control of individual accounts. Once inside, they can:
- Read private conversations
- Access contact lists
- Send messages as if they were you
- Launch new scams targeting your contacts
It becomes a chain reaction. One compromised account can quickly lead to many more. In some cases, attackers impersonate trusted contacts. That makes the scam feel real and urgent.
Why encryption is not enough anymore
Encryption still matters. It protects messages as they travel between devices. But here is the problem. If someone logs into your account, they see everything just like you do.
That means even the most secure app cannot protect you if your login gets compromised. This is a shift in how cyberattacks work. The weakest link is no longer the technology. It is human behavior.
The FBI and CISA are warning that attackers are targeting users of encrypted messaging apps by tricking them into handing over account access. (BackyardProduction/Getty Images)
Who is at risk from messaging app phishing attacks
While the advisory highlights high-profile targets, the tactics are not limited to them.
If you use messaging apps for:
- Personal conversations
- Work communication
- Sharing sensitive information
You are a potential target. Phishing works because it relies on simple mistakes. A quick tap on the wrong link is often all it takes.
What this means for you
This warning highlights a bigger trend. Cyberattacks are becoming more personal. Instead of attacking systems, hackers are targeting people directly. That makes awareness your strongest defense. The more you understand how these scams work, the harder it becomes for attackers to succeed.
Ways to stay safe from messaging app phishing attacks
You do not need to be a cybersecurity expert to protect yourself. You just need to slow things down and follow a few smart habits.
1) Be skeptical of unexpected messages
If a message feels urgent or out of place, pause. Even if it looks like it came from someone you know.
2) Never click suspicious links
Avoid links sent through messages unless you can verify them independently. Strong antivirus software can help detect suspicious behavior after a compromise. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
3) Turn on two-factor authentication
Two-factor authentication (2FA) adds a second layer of protection even if your password gets exposed.
Officials say hackers can read messages, access contacts and impersonate users once they gain control of a messaging app account. (FreshSplash/Getty Images)
4) Watch for login alerts
Many apps notify you when a new device signs in. Do not ignore these warnings.
5) Verify requests in another way
If a contact asks for something unusual, call them or confirm through another channel.
6) Use a data removal service
Limit how much of your personal information is available online. Data removal services work to delete your data from broker sites, making it harder for scammers to target you with convincing phishing messages. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Keep your device and apps updated
Install updates regularly. Security patches fix vulnerabilities that attackers can exploit after gaining access.
Kurt’s key takeaways
Messaging apps feel private. They feel secure. That sense of comfort is exactly what attackers are counting on. The technology is still strong. The real question is whether your habits are keeping up. So the next time a message pops up that feels slightly off, trust that instinct and take a second look.
Have you ever received a suspicious message that made you stop and question if it was real? Let us know by writing to us at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
YouTube’s mobile app finally lets you share timestamped videos
YouTube is changing how you share videos from its mobile app. From the app, you can finally share a video from a specific timestamp, making it easier to point someone to the exact part of a video you want them to see while you’re on your phone. However, this change will replace the Clips feature that lets you make a shareable clip from a video.
You’ll still be able to watch any Clips that you’ve already made. But moving forward, “the ability to set an end time or include a custom description when sharing will no longer be available,” YouTube says. The company notes that while clipping is “an important way for creators to reach new audiences,” “a number of third-party tools with advanced clipping features and authorized creator programs are now available to do this across different video platforms.”
The company originally introduced the Clips feature in 2021.