Watermarking the future

A video of Elizabeth Warren saying Republicans shouldn’t vote went viral in 2023. But it wasn’t Warren. That video of Ron DeSantis wasn’t the Florida governor, either. And nope, Pope Francis was not wearing a white Balenciaga coat. 

Generative AI has made it easier to create deepfakes and spread them around the internet. One of the most commonly proposed solutions is a watermark that would identify AI-generated content. The Biden administration has made watermarks a centerpiece of its policy response, specifically directing tech companies to find ways to identify AI-generated content. The president’s executive order on AI, released in November, was built on commitments from AI developers to figure out how to tag content as AI generated. And it’s not just coming from the White House — legislators, too, are looking at enshrining watermarking requirements in law.

Watermarking can’t be a panacea — for one thing, most systems simply don’t have the capacity to tag text the way they can tag visual media. Still, people are familiar enough with watermarks that the idea of watermarking an AI-generated image feels natural.

Pretty much everyone has seen a watermarked image. Getty Images, which distributes licensed photos taken at events, uses a watermark so ubiquitous and so recognizable that it has become its own meta-meme. (In fact, the watermark is now the basis of Getty’s lawsuit against Stability AI, with Getty alleging that the company must have taken its copyrighted content since its Stable Diffusion model reproduces the Getty watermark in its output.) Of course, artists were signing their works long before digital media or even the rise of photography, to let people know who created the painting. But watermarking itself — according to A History of Graphic Design — began during the Middle Ages, when monks would change the thickness of paper while it was wet and add their own mark. Digital watermarking rose in the ‘90s as digital content grew in popularity. Companies and governments began adding tags (hidden or otherwise) to files to make it easier to track ownership, copyright, and authenticity.

Watermarks will, as before, still denote who owns and created the media that people are looking at. But as a policy solution for the problem of deepfakes, this new wave of watermarks would, in essence, tag content as either AI or human generated. Adequate tagging from AI developers would, in theory, also show the provenance of AI-generated content, thus additionally addressing the question of whether copyrighted material was used in its creation. 

Tech companies have taken up the Biden directive and are slowly releasing their AI watermarking solutions. Watermarking may seem simple, but it has one significant weakness: a watermark pasted on top of an image or video can easily be removed with photo or video editing tools. The challenge, then, is to make a watermark that Photoshop cannot erase.

Companies like Adobe and Microsoft — members of the industry group Coalition for Content Provenance and Authenticity, or C2PA — have adopted Content Credentials, a standard that attaches provenance information to images and videos. Adobe has created a symbol for Content Credentials that gets embedded in the media; Microsoft has its own version as well. Content Credentials embeds certain metadata — like who made the image and what program was used to create it — into the media; ideally, people will be able to click or tap on the symbol to look at that metadata themselves. (Whether this symbol can consistently survive photo editing has yet to be proven.)
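
To make the metadata approach concrete, here is a minimal sketch in Python of provenance-style information being written into and read back out of an image, using Pillow’s PNG text chunks. This is an analogy only: real Content Credentials use the C2PA manifest format with cryptographic signatures, and the field names below are invented for illustration.

```python
# A toy sketch only: provenance-style metadata written into a PNG's text
# chunks with Pillow. Real Content Credentials use the C2PA manifest format
# with cryptographic signatures; the field names here are invented.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A placeholder image standing in for an AI-generated picture.
image = Image.new("RGB", (256, 256), color="gray")

# Attach hypothetical provenance fields as PNG text chunks.
metadata = PngInfo()
metadata.add_text("creator", "example-ai-model")
metadata.add_text("tool", "example-image-generator v1.0")
image.save("tagged.png", pnginfo=metadata)

# A viewer can read the fields back out of the saved file.
reloaded = Image.open("tagged.png")
print(reloaded.text)  # {'creator': 'example-ai-model', 'tool': ...}
```

The sketch also demonstrates the weakness described above: because the information lives in metadata rather than in the pixels, re-encoding or screenshotting the image silently discards it.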

Meanwhile, Google has said it’s currently working on what it calls SynthID, a watermark that embeds itself into the pixels of an image. SynthID is invisible to the human eye but detectable with a dedicated tool. Digimarc, a software company that specializes in digital watermarking, also has its own AI watermarking feature; it adds a machine-readable symbol to an image that stores copyright and ownership information in its metadata.
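
Google has not published how SynthID actually works, so as a concept sketch only, here is a toy least-significant-bit (LSB) watermark in Python: a pixel-level mark that is invisible to the eye but recoverable by a detection tool. Real schemes must also survive compression, cropping, and editing, which this toy version does not.

```python
# A toy least-significant-bit (LSB) watermark in NumPy. This is NOT
# SynthID's algorithm (which Google has not published); it only shows the
# idea of a pixel-level mark invisible to the eye but readable by a tool.
import numpy as np

rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

# Embed: overwrite each pixel's least significant bit with a watermark bit.
watermark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
marked = (image & 0xFE) | watermark  # each pixel changes by at most 1

# Detect: read the LSBs back and compare to the expected pattern.
recovered = marked & 1
print(np.array_equal(recovered, watermark))  # True: watermark fully recovered
print(np.abs(marked.astype(int) - image.astype(int)).max())  # 1: imperceptible
```

Each pixel value shifts by at most 1, far below what the eye can perceive, yet the full pattern reads back perfectly — until someone re-compresses or resizes the image, which is exactly the robustness problem production watermarks have to solve.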

All of these attempts at watermarking look to either make the watermark unnoticeable to the human eye or punt the hard work over to machine-readable metadata. It’s no wonder: this approach is the most surefire way to store the information without it being stripped out, and it encourages people to look closer at an image’s provenance.

That’s all well and good if what you’re trying to build is a copyright detection system, but what does it mean for deepfakes, where the problem is that fallible human eyes are being deceived? Watermarking puts the burden on the consumer, relying on an individual’s sense that something isn’t right to prompt them to dig for information. But people generally do not make a habit of checking the provenance of anything they see online. Even if a deepfake is tagged with telltale metadata, people will still fall for it — we’ve seen countless times that when information gets fact-checked online, many people refuse to believe the correction anyway.

Experts feel a content tag is not enough to prevent disinformation from reaching consumers, so why would watermarking work against deepfakes?  

The best thing you can say about watermarks, it seems, is that at least they’re something. And given the sheer scale at which AI-generated content can be quickly and easily produced, a little friction goes a long way.

After all, there’s nothing wrong with the basic idea of watermarking. Visible watermarks signal authenticity and may encourage people to be more skeptical of media without one. And if a viewer does find themselves curious about authenticity, watermarks directly provide that information.

Watermarking can’t be a perfect solution for the reasons I’ve listed (and besides that, researchers have been able to break many of the watermarking systems out there). But it works in tandem with a growing wave of skepticism toward what people see online. I have to confess that when I began writing this, I believed it was easy to fool people into thinking really good DALL-E 3 or Midjourney images were made by humans. However, I realized that the discourse around AI art and deepfakes has seeped into the consciousness of many chronically online people. Instead of accepting magazine covers or Instagram posts as authentic, there’s now an undercurrent of doubt. Social media users regularly investigate and call out brands when they use AI. Look at how quickly internet sleuths called out the opening credits of Secret Invasion and the AI-generated posters in True Detective.

It’s still not a great strategy to rely on a person’s skepticism, curiosity, or willingness to find out whether something is AI-generated. Watermarks can do good, but there has to be something better. People are more dubious of content than they used to be, but we’re not fully there yet. Someday, we might find a solution that signals something was made by AI without counting on the viewer to go looking.

For now, it’s best to learn to recognize when a video of a politician isn’t really them.

NBA Twitter’s latest ‘Woj Bomb’ was just an NFT scam

People who still use NBA Top Shot were the primary targets of a scam tweet posted to ESPN reporter Adrian Wojnarowski’s account on X Saturday evening at about 6:30PM ET. The tweet referred to NBA Top Shot as a “popular” NFT platform, despite the fact that current activity levels are a tiny fraction of what we saw during its peak, and falsely claimed a “free NFT pack is available to all customers.”

The tweet linked visitors to a scam version of the NBA Top Shot website (the link went to a .org address instead of the official site’s .com URL) that could attempt to drain assets from people who give it access to their crypto wallets. About a half hour later, the official Top Shot account posted, saying, “There is NO Free Airdrop happening on NBA Top Shot at this time, Please be careful and always double check links.”
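
The giveaway was the domain itself: a look-alike under a different top-level domain. As a sketch of that “always double check links” advice, here is a minimal Python check of a URL’s hostname against an expected official domain (the domain names are assumptions for illustration).

```python
# A minimal "double check the link" sketch: compare a URL's hostname to the
# expected official domain. The domain names are assumptions for illustration.
from urllib.parse import urlparse

OFFICIAL_HOST = "nbatopshot.com"  # assumed official domain

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the exact domain or its subdomains; reject look-alikes such as
    # the same name under a different top-level domain (.org vs .com).
    return host == OFFICIAL_HOST or host.endswith("." + OFFICIAL_HOST)

print(looks_official("https://nbatopshot.com/packs"))      # True
print(looks_official("https://nbatopshot.org/free-pack"))  # False
```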

The post was eventually pulled from Wojnarowski’s account after being live for nearly an hour. Because of his reputation for breaking news tweets, many NBA fans have alerts turned on for his posts and could have had account information stolen if they clicked the fraudulent link.

A number of high-profile Twitter / X accounts continue to get compromised. Wojnarowski’s recent NBA news posts have also been syndicated on Threads; however, that account was not used for the scam.

However, the latest NBA Top Shot stats from tracking site Cryptoslam.io show only about 8,100 unique sellers and 5,550 unique buyers for the month of January, down from a peak of more than 399,000 buyers in March 2021, so it’s doubtful there are many people left using it to get scammed by this kind of post.

Fox News AI Newsletter: Google's woke AI image fail

Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY’S NEWSLETTER:

– Google apologizes after new Gemini AI refuses to show pictures, achievements of White people
– AI poised to bolster workplace efficiency and security, Cisco exec says
– Robo-calls no more as federal ruling makes clear statement on annoying practice

Google’s senior director of product management for Gemini has issued an apology after the AI refused to provide images of White people. (Betul Abali/Anadolu via Getty Images)

RACIAL BIAS: The latest version of Google’s Gemini artificial intelligence (AI) will frequently produce images of Black, Native American and Asian people when prompted – but refuses to do the same for White people.

AI BOOST: The rise of artificial intelligence (AI) tools is poised to yield greater workplace efficiency and has the potential to boost security even as bad actors look to exploit those tools.

REVOKE CONSENT: The Federal Communications Commission (FCC) put a final point on its reforms related to automated calls, or “robocalls,” after deciding to ban the use of artificial intelligence (AI) generated voices for marketing calls.

Cisco’s Jeetu Patel told FOX Business that cybersecurity and software development are areas where AI can help businesses facing a talent shortage. (Omar Marques/SOPA Images/LightRocket via Getty Images / Getty Images)

AI BOOM: Nvidia shares soared after the artificial intelligence powerhouse announced a massive jump in quarterly revenue from a year ago, reassuring investors that its AI edge is alive and well.

GETTING ‘TECH’NICAL: All the hype around generative artificial intelligence since the release of OpenAI’s ChatGPT has companies scrambling to hire talent who knows how to implement and harness the rapidly developing technology.

Nvidia logo displayed on a phone screen and microchip and are seen in this illustration photo taken in Krakow, Poland on July 19, 2023. (Jakub Porzycki/NurPhoto via Getty Images)

Google co-founder Sergey Brin sued over a plane crash that killed two pilots last year

Google co-founder Sergey Brin is facing a wrongful death lawsuit from the widow of one of two pilots who died in a plane crash off the coast of California in May 2023. The suit blames a poorly installed modification for the crash and claims Brin’s representatives intentionally slowed recovery efforts to destroy evidence, as previously reported by Bloomberg and Fortune.

An updated complaint filed on February 13th in the Santa Clara County Superior Court of California says Lance Maclean and co-pilot Dean Rushfeldt were contracted to bring Brin’s seaplane from California to Fiji for island-hopping with friends. Ferrying the $8 million, twin-engine Viking Air Twin Otter Series 400 that far required an auxiliary fuel system, which the complaint alleges a mechanic installed “from memory,” without consulting a checklist or logging it with the FAA.

On the first leg of the flight, to Hawaii, the fuel system failed, and the plane crashed into the ocean while trying to return to California. The Coast Guard arrived within 15 minutes but was unable to retrieve either of the pilots from the upside-down and partially submerged aircraft.

Aside from Brin, the lawsuit names Google and Brin’s family investment firm, Bayshore Management, as co-owners of the plane, along with those responsible for setting up the flight and the plane’s maintenance.

The suit says that following the pilots’ deaths, Brin said he would help with recovery. But then, Brin’s representatives allegedly told Maclean’s widow, Maria Magdalena Olarte, that the National Oceanic and Atmospheric Administration (NOAA) was preventing them from recovering the bodies — a claim NOAA denied, according to the complaint.

Olarte is seeking damages for five complaints, including wrongful death and survival negligence, and is demanding a jury trial.
