Watermarking the future

A video of Elizabeth Warren saying Republicans shouldn’t vote went viral in 2023. But it wasn’t Warren. That video of Ron DeSantis wasn’t the Florida governor, either. And nope, Pope Francis was not wearing a white Balenciaga coat. 

Generative AI has made it easier than ever to create deepfakes and spread them around the internet. One of the most commonly proposed solutions is a watermark that would identify AI-generated content. The Biden administration has made watermarks a centerpiece of its policy response, specifically directing tech companies to find ways to identify AI-generated content. The president's executive order on AI, released in November, was built on commitments from AI developers to figure out a way to tag content as AI generated. And it's not just coming from the White House — legislators, too, are looking at enshrining watermarking requirements in law. 

Watermarking can't be a panacea — for one thing, most systems simply don't have the capacity to tag text the way they can tag visual media. Still, people are familiar enough with watermarks that the idea of watermarking an AI-generated image feels natural. 

Pretty much everyone has seen a watermarked image. Getty Images, which distributes licensed photos taken at events, uses a watermark so ubiquitous and so recognizable that it has become its own meta-meme. (In fact, the watermark is now the basis of Getty's lawsuit against Stability AI, with Getty alleging that the company must have trained on its copyrighted content because its image generator reproduces the Getty watermark in its output.) Of course, artists were signing their works long before digital media or even the rise of photography, to let people know who created the painting. But watermarking itself — according to A History of Graphic Design — began during the Middle Ages, when monks would vary the thickness of the paper while it was wet to add their own mark. Digital watermarking rose in the '90s as digital content grew in popularity. Companies and governments began embedding tags (hidden or otherwise) in files to make it easier to track ownership, copyright, and authenticity. 

Watermarks will, as before, still denote who owns and created the media that people are looking at. But as a policy solution for the problem of deepfakes, this new wave of watermarks would, in essence, tag content as either AI or human generated. Adequate tagging from AI developers would, in theory, also show the provenance of AI-generated content, thus additionally addressing the question of whether copyrighted material was used in its creation. 

Tech companies have taken the Biden directive and are slowly releasing their AI watermarking solutions. Watermarking may seem simple, but it has one significant weakness: a watermark pasted on top of an image or video can be easily removed via photo or video editing. The challenge becomes, then, to make a watermark that Photoshop cannot erase. 

Companies like Adobe and Microsoft — members of the industry group Coalition for Content Provenance and Authenticity, or C2PA — have adopted Content Credentials, a standard that attaches provenance information to images and videos. Adobe has created a symbol for Content Credentials that gets embedded in the media; Microsoft has its own version as well. Content Credentials embeds certain metadata — like who made the image and what program was used to create it — into the file; ideally, people will be able to click or tap on the symbol and inspect that metadata themselves. (Whether the symbol can consistently survive photo editing remains to be proven.) 
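Content Credentials is, at bottom, structured metadata riding alongside the pixels. As a rough sketch of the general mechanism (not the actual C2PA format, which is a cryptographically signed manifest), here is how a plain PNG text chunk, the kind of container such metadata can travel in, is assembled; the keyword and value below are hypothetical:

```python
import struct
import zlib

def png_text_chunk(keyword: str, value: str) -> bytes:
    """Assemble a PNG tEXt chunk: 4-byte big-endian length, 4-byte type,
    payload (keyword, NUL separator, text), then a CRC-32 computed over
    the type and payload."""
    payload = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    chunk_type = b"tEXt"
    crc = zlib.crc32(chunk_type + payload)
    return (struct.pack(">I", len(payload))
            + chunk_type + payload
            + struct.pack(">I", crc))

# A hypothetical provenance tag, which a writer would splice into a
# PNG file ahead of its closing IEND chunk.
chunk = png_text_chunk("Software", "ExampleImageGenerator 1.0")
```

The weakness described above falls out of this design: the tag sits beside the image data, not inside it, so exporting to JPEG, taking a screenshot, or resaving in an editor that drops unknown chunks erases the metadata without touching a single pixel.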

Meanwhile, Google has said it’s currently working on what it calls SynthID, a watermark that embeds itself into the pixels of an image. SynthID is invisible to the human eye, but still detectable via a tool. Digimarc, a software company that specializes in digital watermarking, also has its own AI watermarking feature; it adds a machine-readable symbol to an image that stores copyright and ownership information in its metadata. 
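A pixel-level watermark can be sketched with the classic least-significant-bit technique, shown below in plain Python on a toy grayscale image. To be clear, this is an illustrative stand-in: Google has not published how SynthID works, and unlike naive LSB tagging, SynthID is built to survive edits such as cropping and compression. The pixel values and bit pattern here are made up:

```python
from itertools import cycle

def embed_lsb(pixels, bits):
    """Overwrite the least significant bit of each pixel value with a
    repeating watermark pattern; each value changes by at most 1."""
    pattern = cycle(bits)
    return [(p & ~1) | next(pattern) for p in pixels]

def extract_lsb(pixels, n):
    """Read back the first n watermark bits."""
    return [p & 1 for p in pixels[:n]]

image = [203, 17, 88, 140, 255, 0, 64, 129]  # toy grayscale pixel values
mark = [1, 0, 1, 1]                          # hypothetical watermark pattern
tagged = embed_lsb(image, mark)

assert extract_lsb(tagged, len(mark)) == mark               # detectable by a tool
assert all(abs(a - b) <= 1 for a, b in zip(image, tagged))  # invisible to the eye
```

Even this toy version shows why detection needs a tool rather than eyes: the tagged image differs from the original by at most one brightness step per pixel. It also shows the fragility watermark designers must engineer around, since any edit that rewrites low-order bits (resizing, JPEG compression) wipes a naive mark.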

All of these attempts at watermarking aim either to make the watermark unnoticeable to the human eye or to punt the hard work over to machine-readable metadata. It's no wonder: those approaches are the most surefire ways to store information that can't easily be stripped out, and they encourage people to look closer at an image's provenance. 

That's all well and good if what you're trying to build is a copyright detection system, but what does that mean for deepfakes, where the problem is that fallible human eyes are being deceived? Watermarking puts the burden on the consumer, relying on an individual's sense that something isn't right. But people generally do not make a habit of checking the provenance of anything they see online. Even if a deepfake is tagged with telltale metadata, people will still fall for it — we've seen countless times that when information gets fact-checked online, many people refuse to believe the correction.

Experts feel a content tag is not enough to prevent disinformation from reaching consumers, so why would watermarking work against deepfakes?  

The best thing you can say about watermarks, it seems, is that they're at least something. And given the sheer scale at which AI-generated content can be quickly and easily produced, a little friction goes a long way.

After all, there’s nothing wrong with the basic idea of watermarking. Visible watermarks signal authenticity and may encourage people to be more skeptical of media without it. And if a viewer does find themselves curious about authenticity, watermarks directly provide that information. 

Watermarking can't be a perfect solution for the reasons I've listed (and besides that, researchers have been able to break many of the watermarking systems out there). But it works in tandem with a growing wave of skepticism toward what people see online. I have to confess that when I began writing this, I believed it was easy to fool people into thinking really good DALL-E 3 or Midjourney images were made by humans. But I've realized that discourse around AI art and deepfakes has seeped into the consciousness of many chronically online people. Instead of accepting magazine covers or Instagram posts as authentic, there's now an undercurrent of doubt. Social media users regularly investigate and call out brands when they use AI. Look at how quickly internet sleuths called out the opening credits of Secret Invasion and the AI-generated posters in True Detective.

It’s still not an excellent strategy to rely on a person’s skepticism, curiosity, or willingness to find out if something is AI-generated. Watermarks can do good, but there has to be something better. People are more dubious of content, but we’re not fully there yet. Someday, we might find a solution that conveys something is made by AI without hoping the viewer wants to find out if it is. 

For now, it's best to learn to recognize when a video of a politician isn't really them. 

HP ZBook Ultra G1a review: a business-class workstation that’s got game

Business laptops are typically dull computers foisted on employees en masse. But higher-end enterprise workstation notebooks sometimes get an interesting enough blend of power and features to appeal to enthusiasts. HP's ZBook Ultra G1a is a nice example. It's easy to see it as another gray boring-book for spendy business types, until you notice a few key specs: an AMD Strix Halo APU, lots of RAM, an OLED display, and a solid selection of speedy ports (Thunderbolt 4, even — a rarity on AMD laptops).

I know from my time with the Asus ROG Flow Z13 and Framework Desktop that anything using AMD’s high-end Ryzen AI Max chips should make for a compelling computer. But those two are a gaming tablet and a small form factor PC, respectively. Here, you get Strix Halo and its excellent integrated graphics in a straightforward, portable 14-inch laptop — so far, the only one of its kind. That should mean great performance with solid battery life, and the graphics chops to hang with midlevel gaming laptops — all in a computer that wouldn’t draw a second glance in a stuffy office. It’s a decent Windows (or Linux) alternative to a MacBook Pro, albeit for a very high price.

The Good

  • Great screen, keyboard, and trackpad
  • Powerful AMD Strix Halo chip
  • Solid port selection with Thunderbolt 4
  • Can do the work stuff, the boring stuff, and also game

The Bad

  • Expensive
  • Strix Halo can be power-hungry
  • HP’s enterprise-focused security software is nagging

The HP ZBook Ultra G1a starts around $2,100 for a modest six-core AMD Ryzen AI Max Pro 380 processor, 16GB of shared memory, and a basic IPS display. Our review unit is a much higher-spec configuration with a 16-core Ryzen AI Max Plus Pro 395, a 2880 x 1800, 120Hz OLED touchscreen, 2TB of storage, and a whopping 128GB of shared memory, costing nearly $4,700. I often see it discounted by $1,000 or more — still expensive, but more realistic for someone seeking a MacBook Pro alternative. Having this much shared memory is mostly useful for hefty local AI inference workloads and serious dataset crunching; most people don't need it. But with the ongoing memory shortage, I'd also understand wanting to futureproof.

  • Screen: A
  • Webcam: B
  • Keyboard: B
  • Trackpad: B
  • Port selection: B
  • Speakers: B
  • Number of ugly stickers to remove: 1 (only a Windows sticker on the bottom)

Unlike cheaper HP laptops I've tested that made big sacrifices on everyday features like speaker quality, the ZBook Ultra G1a is very good across the board. The OLED is vibrant, with punchy contrast. The keyboard has nice tactility and deep key travel. The mechanical trackpad is smooth, with a good click feel. The 5-megapixel webcam looks solid in most lighting. And the speakers have a full sound that I'm happy to listen to music on all day. I have my gripes, but they're minor: the 400-nit screen could be a little brighter, the four-speaker audio system doesn't sound quite as rich as current MacBook Pros, and my accidental presses of the Page Up and Page Down keys above the arrows really get on my nerves. These quibbles aren't deal-breakers, though for the ZBook's price I wish HP had solved some of them.

The big thing you’re paying for with the ZBook Ultra is that top-end Strix Halo APU, which is so far only found in $2,000+ computers and a sicko-level gaming handheld, though there will be cut-down versions coming to cheaper gaming laptops this year.

The flagship 395 chip in the ZBook offers speedy performance for mixed-use work and enough battery life to eke out an eight-hour workday filled with Chrome tabs and web apps (with power-saving measures). I burned through battery in Adobe Lightroom Classic, but even though Strix Halo is less powerful when disconnected from wall power, the ZBook didn’t get bogged down. I blazed through a hefty batch edit of 47-megapixel RAW images without any particularly long waits on things like AI denoise or automated masking adjustments.

An understated workhorse of a laptop, for an opulent price.

The ZBook stays cool and silent during typical use; pushing it under heavy loads only yields a little warmth in its center and a bit of tolerable fan noise that’s easily drowned out by music, a video, or a game at normal volume.

This isn’t a gaming-focused laptop any more than a MacBook Pro is, as its huge pool of shared memory and graphics cores are meant for workstation duties. However, this thing can game. I spent an entire evening playing Battlefield 6 with friends, with Discord and Chrome open in the background, and the whole time it averaged 70 to 80fps in 1920 x 1200 resolution with Medium preset settings and FSR set to Balanced mode — with peaks above 100fps. Running it at the native 2880 x 1800 got a solid 50-ish fps that’s fine for single-player.

Intel’s new Panther Lake chips also have great integrated graphics for gaming, while being more power-efficient. But Strix Halo edges out Panther Lake in multi-core tasks and graphics, with the flagship 395 version proving as capable as a laptop RTX 4060 discrete GPU. AMD’s beefy mobile chips have also proven great for Linux if you’re looking to get away from Windows.

| Benchmark | HP ZBook Ultra G1a (Ryzen AI Max Plus Pro 395, 128GB / 2TB) | Asus Zenbook Duo (Core Ultra X9 388H, 32GB / 1TB) | MacBook Pro 14 (M5, 16GB / 1TB) | MacBook Pro 16 (M4 Pro, 48GB / 2TB) | Asus ROG Flow Z13 (Ryzen AI Max Plus 395, 32GB / 1TB) | Framework Desktop (Ryzen AI Max Plus 395, 128GB / 1TB) |
| --- | --- | --- | --- | --- | --- | --- |
| CPU cores | 16 | 16 | 10 | 14 | 16 | 16 |
| Graphics cores | 40 | 12 | 10 | 20 | 40 | 40 |
| Geekbench 6 CPU Single | 2826 | 3009 | 4208 | 3976 | 2986 | 2961 |
| Geekbench 6 CPU Multi | 18125 | 17268 | 17948 | 22615 | 19845 | 17484 |
| Geekbench 6 GPU (OpenCL) | 85139 | 56839 | 49059 | 70018 | 80819 | 86948 |
| Cinebench 2024 Single | 113 | 129 | 200 | 179 | 116 | 115 |
| Cinebench 2024 Multi | 1614 | 983 | 1085 | 1744 | 1450 | 1927 |
| PugetBench for Photoshop | 10842 | 8773 | 12354 | 12374 | 10515 | 10951 |
| PugetBench for Premiere Pro (v2.0.0+) | 78151 | 54920 | 71122 | Not tested | Not tested | Not tested |
| Premiere 4K export (min:sec, lower is better) | 2:39 | 3:03 | 3:14 | 2:13 | Not tested | 2:34 |
| Blender Classroom (seconds, lower is better) | 154 | 61 | 44 | Not tested | Not tested | 135 |
| Sustained SSD reads (MB/s) | 6969.04 | 6762.15 | 7049.45 | 6737.84 | 6072.58 | Not tested |
| Sustained SSD writes (MB/s) | 5257.17 | 5679.41 | 7317.6 | 7499.56 | 5403.13 | Not tested |
| 3DMark Time Spy (1080p) | 13257 | 9847 | Not tested | Not tested | 12043 | 17620 |
| Price as tested | $4,689 | $2,299.99 | $1,949 | $3,349 | $2,299.99 | $2,459 |

In addition to Windows 11's upsells and nagging notifications, the ZBook also has HP's Wolf Security, designed for deployment on an IT-managed fleet of company laptops. For someone not using this as a work-managed device, its extra layers of protection may be tolerable, but they're annoying. They range from warning you about files from an "untrusted location" (fine) to pop-ups when plugging in a non-HP USB-C charger (infuriating). You can turn off and uninstall all of this, same as you can for the bloatware AI Companion and Support Assistant apps, but it's part of what HP charges for on its Z workstation line.

You don’t need to spend this kind of money on a kitted-out ZBook Ultra G1a unless you do the kind of specialized computing (local AI models, mathematical simulations, 3D rendering, etc.) it’s designed for. There’s a more attainable configuration, frequently on sale for around $2,500, but its 12-core CPU, lower-specced GPU, and 64GB of shared memory are a dip in performance.

Thunderbolt 4? On an AMD laptop?

Heresy! (I like heresy.)

If you’re mostly interested in gaming, an Asus ROG Zephyrus G14 or even a Razer Blade 16 make a hell of a lot more sense. For about the price of our ZBook Ultra review unit, the Razer gets you an RTX 5090 GPU, with much more powerful gaming performance, while the more modest ROG Zephyrus G14 with an RTX 5060 gets you comparable gaming performance to the ZBook Ultra in a similar form factor for nearly $3,000 less. The biggest knock against those gaming laptops compared to the ZBook is that their fans get much louder under load.

And while it’s easy to think of a MacBook Pro as the lazy answer to all computing needs, it still should be said: If you don’t mind macOS, you can get a whole lot more (non-gaming) performance from an M4 Pro / M4 Max MacBook Pro. Even sticking with Windows and integrated graphics, the Asus Zenbook Duo with Panther Lake at $2,300 is a deal by comparison, once it launches.

This keyboard is excellent.

At $4,700, this is a specific machine for specialized workloads. It’s a travel-friendly 14-inch that can do a bit of everything, but it’s a high price for a jack of all trades if you’re spending your own money. The ZBook piqued my interest because it’s one of the earliest examples of Strix Halo in a conventional laptop. After using it, I’m even more excited to see upcoming models at more down-to-earth prices.

2025 HP ZBook Ultra G1a specs (as reviewed)

  • Display: 14-inch (2880 x 1800) 120Hz OLED touchscreen
  • CPU: AMD Ryzen AI Max Plus Pro 395 (Strix Halo)
  • RAM: 128GB LPDDR5x memory, shared with the GPU
  • Storage: 2TB PCIe 4.0 M.2 NVMe SSD
  • Webcam: 5-megapixel with IR and privacy shutter
  • Connectivity: Wi-Fi 7, Bluetooth 5.4
  • Ports: 2x Thunderbolt 4 / USB-C (up to 40Gbps with Power Delivery and DisplayPort), 1x USB-C 3.2 Gen 2, 1x USB-A 3.2 Gen 2, HDMI 2.1, 3.5mm combo audio jack
  • Biometrics: Windows Hello facial recognition, power button with fingerprint reader
  • Weight: 3.46 pounds / 1.57kg
  • Dimensions: 12.18 x 8.37 x 0.7 inches / 309.37 x 212.60 x 17.78mm
  • Battery: 74.5Whr
  • Price: $4,689

Photography by Antonio G. Di Benedetto / The Verge

Warm-skinned AI robot with camera eyes is seriously creepy

Humanoid robots are no longer hiding in research labs somewhere. These days, they are stepping into public spaces, and they are starting to look alarmingly human. 

A Shanghai startup has now taken that idea further by unveiling what it calls the world’s first biometric AI robot. Yes, it is as creepy as it sounds. The robot is called Moya, and it comes from DroidUp, also known as Zhuoyide. The company revealed Moya at a launch event in Zhangjiang Robotics Valley, a growing hotspot for humanoid development in China. 

At first glance, you can still tell Moya is a robot. The skin looks plasticky. The eyes feel vacant. The movements are slightly off. Then you learn more details about her, and that’s when the discomfort kicks in.

Warm skin makes this humanoid robot feel unsettling

Even when standing still, the robot’s posture and proportions blur the line between machine and person in a way many people find unsettling.  (DroidUp)

Most robots feel cold and mechanical. Moya does not. According to DroidUp, Moya’s body temperature sits between 90°F and 97°F, roughly the same range as a human. Company founder Li Qingdu says robots meant to serve people should feel warm and approachable. That idea sounds thoughtful until you picture a humanoid with warm skin standing next to you in a quiet hallway. DroidUp says this design points toward future use in healthcare, education and commercial settings. It also sees Moya as a daily companion. That idea may excite engineers. However, for many people, it triggers the opposite reaction. Warmth removes one of the few clear signals that separates machines from humans. Once that line blurs, discomfort grows fast.

Why this humanoid robot’s walk feels so off

Moya does not roll or glide. She walks. DroidUp says her walking motion is 92 percent accurate, though it is not clear how that number is calculated. On screen, the movement feels cautious and a little stiff, like someone moving carefully after leg day at the gym. The hardware underneath is doing real work, though. Moya runs on the Walker 3 skeleton, an updated version of the platform that took a bronze-medal finish at the world's first robot half-marathon in Beijing in April 2025. Put simply, robots are getting better at moving through everyday spaces. Watching one do it this convincingly feels strange, not impressive. It makes you stop and stare, then wonder why it feels so uncomfortable.

Camera eyes and facial reactions raise privacy concerns

Behind Moya’s eyes sit cameras. Those cameras allow her to interact with people and respond with subtle facial movements, often called microexpressions. Add onboard AI and DroidUp now labels Moya a fully biomimetic embodied intelligent robot. That phrase sounds impressive. It also raises obvious questions. If a humanoid robot can see you, track your reactions and mirror emotional cues, trust becomes complicated. You may forget you are interacting with a machine. You may act differently. That shift has consequences in public spaces. This is AI moving out of screens and into physical proximity. Once that happens, the stakes change.

Price alone keeps this robot out of your home

If you are worried about waking up to a warm-skinned humanoid in your home, relax for now. Moya is expected to launch in late 2026 at roughly $173,000. That price places her firmly in institutional territory.  DroidUp sees the robot working in train stations, banks, museums and shopping malls. Tasks would include guidance, information and public service interactions. That still leaves plenty of people uneasy, especially those whose jobs already feel vulnerable to automation. For homes, the future still looks more like robot vacuums than walking companions.

Up close, Moya’s eyes look almost human, which raises questions about how much realism is too much for robots meant to operate in public spaces.  (DroidUp)

What this means to you

This is not about buying a humanoid robot tomorrow. It is about where technology is heading. Warm skin, camera eyes and human-like movement signal a shift in design priorities. Engineers want robots that blend in socially. The more they succeed, the harder it becomes to maintain clear boundaries. As these machines enter public spaces, questions about consent, surveillance and emotional manipulation will follow. Even if the robot is polite and helpful, the presence alone changes how people behave. Creepy reactions are not irrational. They are early warning signs.

Kurt’s key takeaways

Moya’s debut feels worth paying attention to because she is real enough to trigger discomfort almost instantly. That reaction matters. It suggests people are being asked to get used to lifelike machines before they have time to question what that really means. Humanoid robots do not need warm skin to be helpful. They do not need faces to point someone in the right direction. Still, companies keep pushing toward realism, even when it makes people uneasy. In tech, speed often comes before reflection, and this is one area where slowing down might matter more than racing ahead.

If a warm-skinned robot with camera eyes greeted you out in public, would you trust it or avoid eye contact and walk faster? Let us know by writing to us at Cyberguy.com.

Moya’s humanlike appearance is intentional, from her warm skin to subtle facial details designed to feel familiar rather than mechanical.   (DroidUp)

Copyright 2026 CyberGuy.com. All rights reserved.

Two more xAI co-founders are among those leaving after the SpaceX merger

Since the xAI-SpaceX merger was announced last week — a deal that combined the two companies (as well as social media platform X) at a reported $1.25 trillion valuation, the biggest merger of all time — a handful of xAI employees and two of its co-founders have abruptly exited the company, penning long departure announcements online. Some also announced that they were starting their own AI companies.

Co-founder Yuhai (Tony) Wu announced his departure on X, writing that it was “time for [his] next chapter.” Jimmy Ba, another co-founder, posted something similar later that day, saying it was “time to recalibrate [his] gradient on the big picture.” The departures mean that xAI is now left with only half of its original 12 co-founders on staff.

It all comes after changing plans for the future of the combined companies, which Elon Musk recently announced would involve “space-based AI” data centers and vertical integration involving “AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform.” Musk reportedly also talked of plans to build an AI satellite factory and city on the moon in an internal xAI meeting.

Musk wrote on X Wednesday that “xAI was reorganized a few days ago to improve speed of execution” and claimed that the process “unfortunately required parting ways with some people,” then put out a call for more people to apply to the company. He also posted a recording of xAI’s 45-minute internal all-hands meeting that announced the changes.

“We’re organizing the company to be more effective at this scale,” Musk said during the meeting. He added that the company will now be organized in four main application areas: Grok Main and Voice, Coding, Imagine (image and video), and Macrohard (“which is intended to do full digital emulation of entire companies,” Musk said).
