A video of Elizabeth Warren saying Republicans shouldn’t vote went viral in 2023. But it wasn’t Warren. That video of Ron DeSantis wasn’t the Florida governor, either. And nope, Pope Francis was not wearing a white Balenciaga coat.
Technology
Watermarking the future
Generative AI has made it easier to create deepfakes and spread them around the internet. One of the most common proposed solutions involves the idea of a watermark that would identify AI-generated content. The Biden administration has made watermarks a centerpiece of its policy response, specifically calling on tech companies to find ways to identify AI-generated content. The president’s executive order on AI, released in October, was built on commitments from AI developers to figure out a way to tag content as AI generated. And it’s not just coming from the White House — legislators, too, are looking at enshrining watermarking requirements in law.
Watermarking can’t be a panacea — for one thing, most systems simply don’t have the capacity to tag text the way they can tag visual media. Still, people are familiar enough with watermarks that the idea of watermarking an AI-generated image feels natural.
Pretty much everyone has seen a watermarked image. Getty Images, which distributes licensed photos taken at events, uses a watermark so ubiquitous and so recognizable that it has become its own meta-meme. (In fact, the watermark is now the basis of Getty’s lawsuit against Stability AI, with Getty alleging that the company’s Stable Diffusion model must have been trained on its copyrighted content since it reproduces the Getty watermark in its output.) Of course, artists were signing their works long before digital media or even the rise of photography, in order to let people know who created the piece. But watermarking itself — according to A History of Graphic Design — began during the Middle Ages, when monks would change the thickness of their paper while it was wet and add their own mark. Digital watermarking rose in the ’90s as digital content grew in popularity. Companies and governments began adding tags (hidden or otherwise) to make it easier to track ownership, copyright, and authenticity.
Watermarks will, as before, still denote who owns and created the media that people are looking at. But as a policy solution for the problem of deepfakes, this new wave of watermarks would, in essence, tag content as either AI or human generated. Adequate tagging from AI developers would, in theory, also show the provenance of AI-generated content, thus additionally addressing the question of whether copyrighted material was used in its creation.
Tech companies have taken the Biden directive and are slowly releasing their AI watermarking solutions. Watermarking may seem simple, but it has one significant weakness: a watermark pasted on top of an image or video can be easily removed via photo or video editing. The challenge becomes, then, to make a watermark that Photoshop cannot erase.
Companies like Adobe and Microsoft — members of the industry group Coalition for Content Provenance and Authenticity, or C2PA — have adopted Content Credentials, a standard that attaches provenance information to images and videos. Adobe has created a symbol for Content Credentials that gets embedded in the media; Microsoft has its own version as well. Content Credentials embeds certain metadata — like who made the image and what program was used to create it — into the media; ideally, people will be able to click or tap on the symbol to look at that metadata themselves. (Whether this symbol can consistently survive photo editing is yet to be proven.)
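The two moving parts of that idea — metadata about who made the image, plus a check that breaks if the media is edited — can be sketched in a few lines. To be clear, this is a heavily simplified, hypothetical stand-in, not the real Content Credentials format: actual C2PA manifests are signed binary structures backed by certificates, while this sketch uses JSON plus an HMAC with a made-up key purely to illustrate the concept.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing certificate; purely illustrative.
SIGNING_KEY = b"demo-key"

def attach_credentials(image_bytes: bytes, creator: str, tool: str) -> dict:
    """Bundle media with signed metadata about who made it and how."""
    metadata = {
        "creator": creator,
        "tool": tool,
        # Hash of the media itself, so edits invalidate the record.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": signature}

def verify_credentials(image_bytes: bytes, record: dict) -> bool:
    """Check the signature, then check the media is unmodified."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    actual_hash = hashlib.sha256(image_bytes).hexdigest()
    return record["metadata"]["content_hash"] == actual_hash
```

Even this toy version shows why the approach is fragile in practice: any edit to the image changes its hash, so the credentials no longer verify — which protects against tampering with the record, but also means an honest crop or re-export sheds the provenance.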
Meanwhile, Google has said it’s currently working on what it calls SynthID, a watermark that embeds itself into the pixels of an image. SynthID is invisible to the human eye, but still detectable via a tool. Digimarc, a software company that specializes in digital watermarking, also has its own AI watermarking feature; it adds a machine-readable symbol to an image that stores copyright and ownership information in its metadata.
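SynthID’s actual algorithm is not public, so as a flavor of how a pixel-level invisible watermark can work at all, here is a classic least-significant-bit sketch — far simpler and far less robust than anything Google or Digimarc ships. Each 8-bit pixel value changes by at most 1 out of 255, imperceptible to the eye, but a tool that knows where to look can read the hidden bits back out.

```python
def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the least significant bit of each pixel with a message bit.

    `pixels` is a flat list of 8-bit grayscale values; each stamped pixel
    differs from the original by at most 1.
    """
    stamped = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stamped + pixels[len(bits):]

def extract_watermark(pixels: list[int], length: int) -> list[int]:
    """Read the hidden bits back out of the low-order bits."""
    return [p & 1 for p in pixels[:length]]
```

The fragility is also instructive: re-encoding, resizing, or even mild compression scrambles low-order bits, which is why production systems spread the signal redundantly across the image rather than relying on individual pixels.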
All of these attempts at watermarking look to either make the watermark unnoticeable to the human eye or punt the hard work over to machine-readable metadata. It’s no wonder: embedding the information this way is the most surefire way to store it without it being stripped out, and it encourages people to look closer at an image’s provenance.
That’s all well and good if what you’re trying to build is a copyright detection system, but what does that mean for deepfakes, where the problem is that fallible human eyes are being deceived? Watermarking puts the burden on the consumer, relying on an individual’s sense that something isn’t right to prompt them to dig for more information. But people generally do not make it a habit to check the provenance of anything they see online. Even if a deepfake is tagged with telltale metadata, people will still fall for it — we’ve seen countless times that when information gets fact-checked online, many people still refuse to believe the correction.
Experts feel a content tag is not enough to prevent disinformation from reaching consumers, so why would watermarking work against deepfakes?
The best thing you can say about watermarks, it seems, is that at least they’re something. And given the sheer volume of AI-generated content that can be produced quickly and cheaply, a little friction goes a long way.
After all, there’s nothing wrong with the basic idea of watermarking. Visible watermarks signal authenticity and may encourage people to be more skeptical of media without it. And if a viewer does find themselves curious about authenticity, watermarks directly provide that information.
Watermarking can’t be a perfect solution for the reasons I’ve listed (and besides, researchers have been able to break many of the watermarking systems out there). But it works in tandem with a growing wave of skepticism toward what people see online. I have to confess that when I began writing this, I believed it was easy to fool people into thinking really good DALL-E 3 or Midjourney images were made by humans. But I’ve realized that discourse around AI art and deepfakes has seeped into the consciousness of many chronically online people. Instead of accepting magazine covers or Instagram posts as authentic, there’s now an undercurrent of doubt. Social media users regularly investigate and call out brands when they use AI. Look at how quickly internet sleuths called out the AI-generated opening credits of Secret Invasion and the AI-generated posters in True Detective.
It’s still not a great strategy to rely on a person’s skepticism, curiosity, or willingness to find out whether something is AI-generated. Watermarks can do some good, but there has to be something better. People are more dubious of online content than they used to be, but we’re not fully there yet. Someday, we might find a solution that conveys that something was made by AI without hoping the viewer cares enough to check.
For now, it’s best to learn to recognize if a video isn’t really of a politician.
Technology
Microsoft’s Edge Copilot update uses AI to pull information from across your tabs
Microsoft Edge is adding a new feature that will allow its Copilot AI chatbot to gather information from all of your open tabs. When you start a conversation with Copilot, you can ask the chatbot questions about what’s in your tabs, compare the products you’re looking at, summarize your open articles, and more.
In its announcement, Microsoft says you can “select which experiences you want or leave off the ones you don’t.” The company is retiring Copilot Mode as well, which could similarly draw information from your tabs and also offered some agentic features, like the ability to book a reservation on your behalf. Microsoft has since folded these agentic capabilities into its “Browse with Copilot” tool.
Several other AI features are coming to Edge, including an AI-powered “Study and Learn” mode that can turn the article you’re looking at into a study session or interactive quiz. There’s a new tool that turns your tabs into AI-powered podcasts as well, similar to what you’d find on NotebookLM, and an AI writing assistant that will pop up when you start entering text on a webpage.
You can also give Copilot permission to access your browsing history to provide more “relevant, high-quality answers,” according to Microsoft. Copilot in Edge on desktop and mobile will come with “long-term memory” as well, which can tailor its responses based on your previous conversations. And, when you open up a new tab, you’ll see a redesigned page that combines chat, search, and web navigation, along with the Journeys feature, which uses AI to organize your browsing history into categories that you can revisit.
Meanwhile, an update to Edge’s mobile app will allow you to share your screen with Copilot and talk through the questions about what you’re seeing. Microsoft says you’ll see “clear visual cues” when Copilot is active, “so you know when it’s taking an action, helping, listening, or viewing.”
Technology
Apple’s $250M Siri settlement: Are you owed cash?
If you bought a newer iPhone because Apple made Siri sound like it was about to become your personal artificial intelligence sidekick, you may want to pay attention.
Apple has agreed to pay $250 million to settle a class-action lawsuit over claims that it misled customers about new Apple Intelligence and Siri features. The case centers on the iPhone 16 launch and certain iPhone 15 models that were marketed as ready for Apple’s next wave of AI. The settlement still needs court approval, and Apple denies wrongdoing.
The lawsuit argues that Apple promoted a smarter, more personal Siri before those features were actually available. For some buyers, that was a big deal. A new iPhone can cost hundreds of dollars, and many people upgrade only when they think they are getting something meaningfully new.
Sign up for my FREE CyberGuy Report
- Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox.
- For simple, real-world ways to spot scams early and stay protected, visit CyberGuy.com – trusted by millions who watch CyberGuy on TV daily.
- Plus, you’ll get instant access to my Ultimate Scam Survival Guide free when you join.
U.S. buyers of certain iPhone 16 and iPhone 15 Pro models may qualify for payments if a judge approves Apple’s proposed settlement. (Getty Images)
What Apple is accused of promising
Apple introduced Apple Intelligence in June 2024 and promoted it as a major step forward for iPhone, iPad and Mac. A key part of that pitch was a more personalized Siri that could understand context, work across apps and help with everyday tasks in a more useful way.
The lawsuit claims Apple’s marketing made consumers believe those advanced Siri features would arrive with the iPhone 16 or soon after. Instead, buyers received phones that had some Apple Intelligence tools, but not the full Siri overhaul that many expected.
That gap is the heart of the case. Plaintiffs say customers bought or upgraded devices based on AI features that were not ready. Apple says it has rolled out many Apple Intelligence features and that it settled the case so it can stay focused on its products.
How much money could iPhone owners get?
The proposed settlement creates a $250 million fund. Eligible customers who file approved claims are expected to receive at least $25 per eligible device. That amount could rise to as much as $95 per device, depending on how many people file claims and other settlement factors.
That means this will not be a huge payday for most people. Still, if you bought one of the covered phones, it may be worth watching for a claim notice. A few minutes of paperwork could put some money back in your pocket.
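To see why the payout range is so wide, here is a back-of-the-envelope sketch. The $25 floor and $95 cap come from the reported settlement terms, but the simple even split below is an assumption for illustration only; the actual allocation formula will be set by the settlement administrator and must also cover fees and administration costs, which are ignored here.

```python
# Reported settlement terms.
FUND = 250_000_000  # total settlement fund, in dollars
FLOOR, CAP = 25, 95  # reported per-device minimum and maximum

def estimated_payout(claimed_devices: int) -> float:
    """Rough per-device payout if the fund were split evenly (an
    illustrative assumption, not the real allocation formula)."""
    per_device = FUND / claimed_devices
    return max(FLOOR, min(CAP, per_device))
```

The arithmetic shows the dynamic in the article: with millions of eligible devices but typically low claim rates, fewer filers means each approved claim gets closer to the cap.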
Which iPhones may qualify?
The proposed settlement covers U.S. buyers who purchased any iPhone 16 model, iPhone 15 Pro or iPhone 15 Pro Max between June 10, 2024, and March 29, 2025.
Covered iPhone 16 models include the iPhone 16, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max and iPhone 16e. The settlement also includes the iPhone 15 Pro and iPhone 15 Pro Max, but not every iPhone 15 model.
The key details are the device model, the purchase date and whether the phone was bought in the United States.
Apple has agreed to pay $250 million to settle claims it misled customers about Apple Intelligence and Siri features on newer iPhones. (Michael Nagle/Bloomberg)
How will you file a claim?
You do not need to do anything immediately. The settlement still needs a judge’s approval. Once the claims process opens, eligible customers are expected to receive a notice by email or mail with instructions on how to file through a settlement website.
That notice matters because scammers love moments like this. A real settlement notice should not ask for your Apple ID password, bank login or payment to claim your money. If you receive a message about this settlement, do not click blindly. Go slowly, check the sender and look for the official settlement administrator details once they are available.
Why this case matters beyond one Siri feature
This case hits a bigger nerve. Tech companies are racing to sell AI as the next must-have feature. That creates a problem for shoppers. You are often asked to buy now based on what a company says will arrive later.
That can be frustrating when the feature is the reason you upgraded. A smarter Siri sounds useful. A phone that can understand your personal context, search across apps and help with daily tasks could save time. But if those tools are delayed, limited or missing, the value of the upgrade changes.
This settlement also sends a message about AI marketing. Companies can talk about future features, but consumers need clear timing and plain explanations. “Coming soon” can mean very different things when you are spending $800, $1,000 or more.
We reached out to Apple for comment, but did not hear back before our deadline.
Apple denies wrongdoing but agreed to settle claims tied to its marketing of Apple Intelligence and Siri features. (Qilai Shen/Bloomberg)
What this means to you
If you bought a covered iPhone during the settlement period, keep an eye on your email and regular mail. You may qualify for a payment if the court approves the deal.
You should also keep your receipt or proof of purchase if you have it. Your Apple purchase history, carrier account or retailer receipt may help if the claim process asks for details.
More broadly, this is a reminder to treat AI features like any other big tech promise. Before you upgrade, ask one simple question: Can the feature do what is being advertised today, or is the company asking me to wait?
That question can save you from buying a device for a future feature that may arrive much later than expected.
Kurt’s key takeaways
Apple has built its brand on making technology feel polished, personal and easy to use. That is why this Siri settlement hits a nerve. People were buying phones they use every day for texts, photos, directions, reminders and everything in between. Many expected AI to make those everyday tasks easier, which is why the delay felt frustrating. The proposed payout may be modest, but the bigger issue is trust. When a company sells AI as a reason to upgrade, customers deserve to know what actually works now and what is still coming later.
Would you still buy a new phone for promised AI features, or would you wait until they actually show up? Let us know by writing to us at CyberGuy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Instagram hits the copy button again with new disappearing Instants photos
Instagram is once again cribbing from competitors like Snapchat and BeReal with a new photo-sharing format it calls “Instants”: ephemeral photos that you can’t edit and that you can share only with close friends or followers who follow you back. Instants are available globally beginning on Wednesday as a feature in the inbox of the Instagram app and as a separate app that’s now in testing in select countries.
To access Instants from the Instagram app, go to your DM inbox and look in the bottom-right corner for an icon of a stack of photos. After you post a photo, your friends can react with an emoji and send a reply to your DMs, but once they’ve seen it, the photo disappears for them. Instants also disappear after 24 hours, and they can’t be captured in screenshots or screen recordings.
However, your Instants will remain in an archive for you for up to a year, and you can reshare them as a recap to your Instagram Stories if you’d like. You can also undo sending an Instant right after you post it or delete it from your archive.
The Instants mobile app, which popped up in Italy and Spain in April, gives you “immediate access to the camera” and only requires an Instagram account, Instagram says. “Instants you share on the separate app will show up for friends on Instagram and vice versa. We’re trying this separate app out to see how our community uses it, and we’ll continue to evolve it as we learn more.”
Instagram, in its testing, has seen that people “tend to use Instants to share much more casual, much more authentic moments about their day,” according to Instagram boss Adam Mosseri. “And we know that this type of sharing of personal moments with friends is a core part of what makes Instagram Instagram, but we also know that a lot of people don’t really share a lot to their profile grids anymore.”