
Technology

Watermarking the future

A video of Elizabeth Warren saying Republicans shouldn’t vote went viral in 2023. But it wasn’t Warren. That video of Ron DeSantis wasn’t the Florida governor, either. And nope, Pope Francis was not wearing a white Balenciaga coat. 

Generative AI has made it easier to create deepfakes and spread them around the internet. One of the most common proposed solutions is a watermark that would identify AI-generated content. The Biden administration has made watermarks a centerpiece of its policy response, specifically directing tech companies to find ways to identify AI-generated content. The president’s executive order on AI, released in November, built on commitments from AI developers to figure out a way to tag content as AI-generated. And it’s not just coming from the White House — legislators, too, are looking at enshrining watermarking requirements in law.

Watermarking can’t be a panacea — for one thing, most systems simply don’t have the capacity to tag text the way they can tag visual media. Still, people are familiar enough with watermarks that the idea of watermarking an AI-generated image feels natural.

Pretty much everyone has seen a watermarked image. Getty Images, which distributes licensed photos taken at events, uses a watermark so ubiquitous and so recognizable that it is its own meta-meme. (In fact, the watermark is now the basis of Getty’s lawsuit against Stability AI, with Getty alleging that Stability AI must have taken its copyrighted content since its Stable Diffusion tool generates the Getty watermark in its output.) Of course, artists were signing their works to let people know who created them long before digital media or even the rise of photography. But watermarking itself — according to A History of Graphic Design — began during the Middle Ages, when monks would change the thickness of the printing paper while it was wet and add their own mark. Digital watermarking rose in the ’90s as digital content grew in popularity. Companies and governments began adding tags, hidden or otherwise, to digital media to make it easier to track ownership, copyright, and authenticity.

Watermarks will, as before, still denote who owns and created the media people are looking at. But as a policy solution to the problem of deepfakes, this new wave of watermarks would, in essence, tag content as either AI- or human-generated. Adequate tagging from AI developers would, in theory, also show the provenance of AI-generated content, additionally addressing the question of whether copyrighted material was used in its creation.

Tech companies have taken the Biden directive and are slowly releasing their AI watermarking solutions. Watermarking may seem simple, but it has one significant weakness: a watermark pasted on top of an image or video can be easily removed via photo or video editing. The challenge becomes, then, to make a watermark that Photoshop cannot erase. 

Companies like Adobe and Microsoft — members of the industry group Coalition for Content Provenance and Authenticity, or C2PA — have adopted Content Credentials, a standard that attaches provenance information to images and videos. Adobe has created a symbol for Content Credentials that gets embedded in the media; Microsoft has its own version as well. Content Credentials embeds certain metadata — like who made the image and what program was used to create it — into the media; ideally, people will be able to click or tap on the symbol to look at that metadata themselves. (Whether this symbol can consistently survive photo editing is yet to be proven.)
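
To make the general idea concrete, here is a minimal sketch in Python of how provenance metadata can be bound to media: a manifest records who made an image and with what tool, and a keyed hash of the image bytes makes any later edit detectable. This is an illustration, not the C2PA format — the field names, the `ExampleGen` tool, and the demo signing key are all assumptions; real Content Credentials use certificate-based signatures on standardized manifests embedded in the file.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration; real systems use certificate-based signing.
SIGNING_KEY = b"demo-key"

def make_manifest(image_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a provenance manifest bound to the image content by a keyed hash."""
    digest = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()
    return {"creator": creator, "tool": tool, "content_hmac": digest}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the image bytes still match the manifest's hash."""
    expected = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["content_hmac"])

image = b"\x89PNG...fake image bytes for illustration"
manifest = make_manifest(image, creator="Jane Doe", tool="ExampleGen 1.0")

print(json.dumps(manifest, indent=2))
print(verify_manifest(image, manifest))              # unmodified image passes
print(verify_manifest(image + b"edited", manifest))  # edited image fails
```

The point of binding the manifest to a hash of the pixels is exactly the property the article describes: the tag travels with the content, and any edit that changes the bytes breaks the verification.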

Meanwhile, Google has said it’s currently working on what it calls SynthID, a watermark that embeds itself into the pixels of an image. SynthID is invisible to the human eye, but still detectable via a tool. Digimarc, a software company that specializes in digital watermarking, also has its own AI watermarking feature; it adds a machine-readable symbol to an image that stores copyright and ownership information in its metadata. 
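
Google hasn’t published SynthID’s internals, but the general idea of a watermark that is invisible to the eye yet machine-readable can be illustrated with the classic least-significant-bit technique — a far simpler and far less robust stand-in, sketched here over a flat list of 8-bit pixel values:

```python
def embed_watermark(pixels: list[int], bits: str) -> list[int]:
    """Hide a bit string in the least significant bit of each pixel value.

    Changing the LSB shifts a pixel's brightness by at most 1 out of 255,
    which is imperceptible to the human eye but trivial for software to read.
    """
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | int(bit)
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read the hidden bit string back out of the pixel LSBs."""
    return "".join(str(p & 1) for p in pixels[:length])

original = [120, 200, 33, 47, 180, 99, 254, 10]
tag = "1011"  # hypothetical "made by AI" marker
marked = embed_watermark(original, tag)

print(extract_watermark(marked, len(tag)))                # recovers "1011"
print(max(abs(a - b) for a, b in zip(original, marked)))  # pixels shift by at most 1
```

An LSB mark like this is wiped out by almost any re-encoding or resize, which is exactly why a production system like SynthID has to embed its signal far more robustly across the image.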

All of these attempts at watermarking look to either make the watermark unnoticeable to the human eye or punt the hard work to machine-readable metadata. It’s no wonder: that’s the most reliable way to store information so it can’t easily be removed, and it encourages people to look closer at an image’s provenance.

That’s all well and good if what you’re trying to build is a copyright detection system, but what does that mean for deepfakes, where the problem is that fallible human eyes are being deceived? Watermarking puts the burden on the consumer, relying on an individual’s sense that something isn’t right to prompt them to look closer. But people generally do not make it a habit to check the provenance of anything they see online. Even if a deepfake is tagged with telltale metadata, people will still fall for it — we’ve seen countless times that when information gets fact-checked online, many people still refuse to believe the fact-check.

Experts feel a content tag is not enough to prevent disinformation from reaching consumers, so why would watermarking work against deepfakes?  

The best thing you can say about watermarks, it seems, is that at least it’s anything at all. And given the sheer scale at which AI-generated content can be quickly and easily produced, a little friction goes a long way.

After all, there’s nothing wrong with the basic idea of watermarking. Visible watermarks signal authenticity and may encourage people to be more skeptical of media without it. And if a viewer does find themselves curious about authenticity, watermarks directly provide that information. 

Watermarking can’t be a perfect solution for the reasons I’ve listed (and besides that, researchers have been able to break many of the watermarking systems out there). But it works in tandem with a growing wave of skepticism toward what people see online. I have to confess that when I began writing this, I believed it was easy to fool people into thinking really good DALL-E 3 or Midjourney images were made by humans. However, I realized that discourse around AI art and deepfakes has seeped into the consciousness of many chronically online people. Instead of accepting magazine covers or Instagram posts as authentic, there’s now an undercurrent of doubt. Social media users regularly investigate and call out brands when they use AI. Look at how quickly internet sleuths called out the opening credits of Secret Invasion and the AI-generated posters in True Detective.

It’s still not an excellent strategy to rely on a person’s skepticism, curiosity, or willingness to find out if something is AI-generated. Watermarks can do good, but there has to be something better. People are more dubious of content, but we’re not fully there yet. Someday, we might find a solution that conveys something is made by AI without hoping the viewer wants to find out if it is. 

For now, it’s best to learn to recognize if a video isn’t really of a politician. 

Junji Ito’s terrifying Uzumaki hits Adult Swim in September

Image: Adult Swim

Adult Swim’s long-awaited adaptation of Uzumaki finally has a premiere date — and an appropriately creepy trailer. The series, based on the classic horror manga from Junji Ito, will start airing on September 28th. Episodes will hit Adult Swim first, and then stream on Max the following day.

Uzumaki follows a cursed town that is — and I promise it’s scarier than it sounds — plagued by spirals. Here’s the full synopsis:

“Let’s leave this town together,” asks Shuichi Saito, a former classmate of Kirie Goshima, a high school girl who was born and grew up in Kurouzu-cho. Everything from a strange whirlwind, billowing smoke from the crematorium, and the residents is turning into spirals. People’s eyes spin in whirls, a tongue spirals, and the…

New prosthetics restore natural movement via nerve connection


In the world of prosthetics, a groundbreaking advancement is changing the game for individuals with lower-limb amputations. 

Researchers at MIT, in collaboration with Brigham and Women’s Hospital, have developed a neuroprosthetic system that allows users to control their prosthetic legs using their own nervous systems. 

This innovative approach could bring us closer to a future of fully integrated, naturally controlled artificial limbs.

A person wearing the neuroprosthetic system (Hugh Herr and Hyungeun Song)

The AMI: A surgical game-changer

At the heart of this breakthrough is a surgical procedure known as the agonist-antagonist myoneural interface, or AMI. Unlike traditional amputation methods, the AMI reconnects muscles in the residual limb, preserving the natural push-pull dynamics of muscle pairs. This seemingly simple change has profound implications for prosthetic control and function.

Illustration of how the neuroprosthetic system works (MIT Media Lab)

Dr. Hugh Herr, a professor at MIT and senior author of the study, explained the significance: “This is the first prosthetic study in history that shows a leg prosthesis under full neural modulation, where a biomimetic gait emerges. No one has been able to show this level of brain control that produces a natural gait, where the human’s nervous system is controlling the movement, not a robotic control algorithm.”

Dr. Hugh Herr pictured with the neuroprosthetic system (Jimmy Day, MIT Media Lab)

The power of proprioception

The key advantage of the AMI system is its ability to provide users with proprioceptive feedback, the sense of where their limb is in space. This sensory information, often taken for granted by those with intact limbs, is crucial for natural movement and control. With the AMI, patients regain a portion of this vital feedback, allowing them to walk more naturally and confidently.

In the study, seven patients with AMI surgery were compared to seven with traditional amputations. The results were striking. AMI patients walked faster, navigated obstacles more easily and climbed stairs with greater agility. They also demonstrated more natural movements, such as pointing their toes upward when stepping over obstacles, a subtle but important aspect of a natural gait.

Adapting to real-world challenges

One of the most impressive aspects of the AMI system is its versatility. Patients were able to adapt their gait to various real-world conditions, including walking on slopes and navigating stairs. This adaptability is crucial for everyday life, where terrain and challenges can change rapidly.

The system’s responsiveness was put to the test in an obstacle-crossing trial. AMI patients were able to modify their gait to clear obstacles more effectively than those with traditional prosthetics. This ability to rapidly adjust to unexpected challenges is a hallmark of natural limb function and represents a significant leap forward in prosthetic technology.

The science of sensory feedback

The success of the AMI system hinges on its ability to augment residual muscle afferents, which are the sensory signals sent from muscles to the nervous system. Remarkably, even a modest increase in these signals allows for significantly improved control and function. This finding highlights the incredible adaptability of the human nervous system and its ability to integrate and utilize even partial sensory information.

Dr. Hyungeun Song, lead author of the study, notes: “One of the main findings here is that a small increase in neural feedback from your amputated limb can restore significant bionic neural controllability, to a point where you allow people to directly neurally control the speed of walking, adapt to different terrain and avoid obstacles.”

Looking to the future

While this research represents a significant step forward, it’s just the beginning. The team at MIT is exploring ways to further enhance sensory feedback and improve the integration between the human nervous system and prosthetic devices. The AMI procedure has already been performed on about 60 patients worldwide, including those with arm amputations, suggesting broad applicability across different types of limb loss.

As this technology continues to evolve, we may see even more natural and intuitive control of artificial limbs. The ultimate goal is to create prosthetics that feel and function like a natural part of the user’s body, blurring the line between human and machine.

Kurt’s key takeaways

The development of prosthetic limbs controlled by the nervous system marks the beginning of a new era in bionics. It offers hope for improved mobility, independence and quality of life for millions of people living with limb loss. Moreover, it provides valuable insights into the plasticity of the human nervous system and our ability to integrate with advanced technology.

As we continue to push the boundaries of what’s possible in merging biology and technology, we open up new frontiers in human augmentation and rehabilitation. The implications extend far beyond prosthetics, potentially influencing fields such as neurology, robotics and even our understanding of human consciousness and embodiment.

Advertisement

How comfortable would you be with technology that directly interfaces with your nervous system? Let us know by writing us at Cyberguy.com/Contact.

Copyright 2024 CyberGuy.com. All rights reserved.

Here’s your first look at Amazon’s Like a Dragon: Yakuza

Amazon says that the show “showcases modern Japan and the dramatic stories of these intense characters, such as the legendary Kazuma Kiryu, that games in the past have not been able to explore.” Kiryu will be played by Ryoma Takeuchi, while Kento Kaku also stars as Akira Nishikiyama. The series is directed by Masaharu Take.

Like a Dragon: Yakuza starts streaming on Prime Video on October 24th with its first three episodes.
