Technology

Hollywood cozied up to AI in 2025 and had nothing good to show for it

AI isn’t new to Hollywood — but this was the year when it truly made its presence felt. For years, the entertainment industry has used various generative AI products in post-production processes ranging from de-aging actors to removing green screen backgrounds. In many instances, the technology has been a useful tool for human artists tasked with tedious, painstaking labor that might otherwise have taken them inordinate amounts of time to complete. But in 2025, Hollywood began warming in earnest to the idea of deploying the kind of gen AI that’s only good for conjuring up text-to-video slop with few practical uses in traditional production workflows. Despite all of the money and effort being poured into it, no gen-AI project has yet shown why the technology is worth the hype.

This confluence of Hollywood and AI didn’t start out so rosy. Studios were in a prime position to take the companies behind this technology to court because their video generation models had clearly been trained on copyrighted intellectual property. A number of major production companies including Disney, Universal, and Warner Bros. Discovery did file lawsuits against AI firms and their boosters for that very reason. But rather than pummeling AI purveyors into the ground, some of Hollywood’s biggest power players chose instead to get into bed with them. We have only just begun to see what can come from this new era of gen-AI partnerships, but all signs point to things getting much sloppier in the very near future.

Though many of this year’s gen-AI headlines were dominated by larger outfits like Google and OpenAI, a number of smaller players also vied for a seat at the entertainment table. There was Asteria, Natasha Lyonne’s startup focused on developing film projects with “ethically” engineered video generation models, and Showrunner, an Amazon-backed platform designed to let subscribers create animated “shows” (a very generous term) from just a few descriptive sentences plugged into Discord. These relatively new companies were all desperate to legitimize the idea that their flavor of gen AI could supercharge film / TV development while bringing down overall production costs.

Asteria didn’t have anything more than hype to share with the public after announcing its first film, and it was hard to believe that normal people would be interested in paying for Showrunner’s shoddily cobbled-together knockoffs of shows made by actual animators. In the latter case, it felt very much like Showrunner’s real goal was to secure juicy partnerships with established studios like Disney that would lead to their tech being baked into platforms where users could prompt up bespoke content featuring recognizable characters from massive franchises.

That idea seemed fairly ridiculous when Showrunner first hit the scene because its models churn out the modern equivalent of clunky JibJab cartoons. But in due time, Disney made it clear that — crappy as text-to-video generators tend to be for anything beyond quick memes — it was interested in experimenting with that kind of content. In December, Disney entered into a three-year, billion-dollar licensing deal with OpenAI that would let Sora users make AI videos with 200 different characters from Star Wars, Marvel, and more.

Netflix became one of the first big studios to proudly announce that it was going all-in on gen AI. After using the technology to produce special effects for one of its original series, the streamer published a list of general guidelines it wanted its partners to follow if they planned to jump on the slop bandwagon as well. Though Netflix wasn’t mandating that filmmakers use gen AI, it made clear that saving money on VFX work was one of the main reasons it was coming out in support of the trend. And it wasn’t long before Amazon followed suit by releasing multiple Japanese anime series that were terribly localized into other languages because the dubbing process didn’t involve any human translators or voice actors.

Amazon’s gen-AI dubs became a shining example of how poorly this technology can perform. They also highlighted how some studios aren’t putting all that much effort into making sure that their gen AI-derived projects are polished enough to be released to the public. That was also true of Amazon’s machine-generated TV recaps, which frequently got details about different shows very wrong. Both of these fiascos made it seem as if Amazon somehow thought that people wouldn’t notice or care about AI’s inability to consistently generate high-quality outputs. The studio quickly pulled its AI-dubbed series and the recap feature down, but it didn’t say that it wouldn’t try this kind of nonsense again.

Disney-provided examples of its characters in Sora AI content.
Image: Disney

All of this, along with dumb stunts like AI “actress” Tilly Norwood, made it feel like certain segments of the entertainment industry were becoming more comfortable foisting gen-AI “entertainment” on a public that remained deeply unimpressed and put off. None of these projects demonstrated why anyone besides money-pinching execs (and the people who worship them for some reason) would be excited by a future shaped by this technology.

Aside from a few unimpressive images, we still haven’t seen what might come from some of these collaborations, like Disney cozying up to OpenAI. But next year, AI’s presence in Hollywood will be even more pronounced. Disney plans to dedicate an entire section of its streaming service to user-generated content sourced from Sora, and it will encourage Disney employees to use OpenAI’s ChatGPT products. The deal’s real significance in this current moment, though, is the message it sends to other studios about how they should move as Hollywood enters its slop era.

Regardless of whether Disney thinks this will work out well, the studio has signaled that it doesn’t want to be left behind if AI adoption keeps accelerating. That tells other production houses that they should follow suit, and if that becomes the case, there’s no telling how much more of this stuff we are all going to be forced to endure.

Technology

Samsung’s Digital Home Key lets you use your phone as your key

Just days after showing off the Galaxy S26, Samsung is finally rolling out the ability for users to unlock their home with a tap of their phone or by simply approaching their door. The new feature, called Digital Home Key, will live inside Samsung Wallet and is powered by the Aliro smart home standard.

Samsung first teased its Digital Home Key feature in 2024 and said it would be available in 2025. That didn’t pan out, as the CSA’s Aliro standard — which will let users unlock smart locks with any phone — only arrived in February of this year. The standard uses near-field communication (NFC) for its tap-to-unlock functionality, and it also supports ultra-wideband (UWB), which lets users unlock their door as they approach, without pulling out their phone.

To add a Digital Home Key to your wallet, you’ll need to set up a compatible smart lock through SmartThings using Matter. Only some Galaxy smartphones support both NFC and UWB, including the Galaxy Z Fold 4 and up, as well as the Galaxy S22 Ultra and up. You can view the full list of compatible devices on Samsung’s website.

Technology

China’s ultrasound brain tech race heats up

When you hear “brain-computer interface,” you probably picture surgery, wires and a chip in your head. Now picture something quieter. No implant. No incision. Just sound waves directed at the brain.

That is the approach behind a new wave of ultrasound brain-computer interface companies in China. One of the newest is Gestala, founded in Chengdu with offices in Shanghai and Hong Kong. The company says it is developing technology that can stimulate and eventually study brain activity using focused ultrasound.

Yes, the same basic technology is used in medical imaging. But this time, it targets neural circuits.

Brain imaging highlights the regions researchers study as companies explore noninvasive ultrasound brain-computer interface technology. (Kurt “CyberGuy” Knutsson)

What is an ultrasound brain-computer interface?

Most brain-computer interface systems rely on electrodes that detect electrical signals from neurons. Neuralink is the most visible example. It places tiny threads inside the brain to record activity. Ultrasound works differently.

Instead of measuring electrical signals directly, it uses high-frequency sound waves. Depending on intensity and focus, those waves can:

  • Create images of internal tissue
  • Destroy abnormal tissue such as tumors
  • Modulate neural activity without open surgery

Focused ultrasound treatments are already approved for Parkinson’s disease, uterine fibroids and certain tumors. That clinical history gives companies like Gestala a foundation to build on. However, studying or interpreting brain signals with ultrasound is far more complex than delivering targeted stimulation.

Unlike implant-based systems such as Neuralink, ultrasound brain computer interface research focuses on stimulating the brain without surgery. (Neuralink)

How Gestala plans to treat chronic pain with focused ultrasound

Gestala’s first product is focused on chronic pain. The company plans to target the anterior cingulate cortex, a brain region linked to the emotional experience of pain. Early pilot studies suggest that stimulating this area can reduce pain intensity for up to a week in some patients.

The first-generation device will be a stationary system used in clinics. Patients would visit a hospital for treatment sessions. Later, the company plans to develop a wearable helmet designed for supervised use at home. Over time, Gestala says it wants to expand into depression, other mental health conditions, stroke rehabilitation, Alzheimer’s disease and sleep disorders. That is an ambitious roadmap: each condition involves different brain networks and clinical hurdles.

Can ultrasound read brain activity without implants?

Like other brain tech startups, Gestala is also exploring whether ultrasound could help interpret brain activity. The long-term concept is straightforward in theory. A device could detect patterns linked to chronic pain or depression, then deliver stimulation to specific regions in response.

Unlike traditional brain implants, which capture electrical signals from limited areas, an ultrasound-based system may have the potential to access broader regions of the brain. That possibility is one reason researchers are paying attention. Still, translating that concept into reliable data is a major engineering challenge.

The global race to build noninvasive brain interfaces

China is not alone in exploring ultrasound brain-computer interface systems. Earlier this month, OpenAI announced a significant investment in Merge Labs, a startup cofounded by Sam Altman along with researchers linked to Forest Neurotech.

Public materials from Merge Labs mention restoring lost abilities, supporting healthier brain states and deepening human connection with advanced AI. That language signals long-term ambitions. Yet experts caution that real-world applications are still years away.

Researchers use MRI guidance to precisely target the anterior cingulate cortex with focused ultrasound during chronic pain studies. (Gestala)

The technical limits of ultrasound brain interfaces

Ultrasound faces real technical limits. First, the skull weakens and distorts sound waves, making it harder to obtain precise signals. In research settings, detailed readouts of neural activity have required special implants that let ultrasound pass through more cleanly than bone does.

Second, ultrasound measures changes in blood flow. Blood flow shifts more slowly than electrical firing in neurons. That delay may limit applications that require fast, detailed signal decoding, such as real-time speech translation. In short, stimulation is one challenge. Accurate readout is another level entirely.
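That second limit can be made concrete with a small simulation. This is purely illustrative and not from Gestala or the article: the burst timing and the gamma-shaped hemodynamic response are simplifying assumptions commonly used to model how blood flow trails electrical activity. The point is only that the flow signal peaks seconds after the neural event, which rules out fast decoding.

```python
import numpy as np

# Illustrative sketch: why a blood-flow-based readout lags neural firing.
# We model a brief burst of activity, then convolve it with a simplified
# gamma-shaped hemodynamic response that peaks a few seconds later.
dt = 0.01                                 # 10 ms time steps
t = np.arange(0, 20, dt)                  # 20-second window

neural = np.zeros_like(t)
neural[(t >= 1.0) & (t < 1.05)] = 1.0     # 50 ms burst of firing at t = 1 s

# Simplified hemodynamic response function (peaks at t = 5 s for t^5 * e^-t)
hrf_t = np.arange(0, 15, dt)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)
hrf /= hrf.sum()                          # normalize so it acts as a weighting

# Blood-flow signal = neural activity smoothed by the hemodynamic response
blood_flow = np.convolve(neural, hrf)[: len(t)]

peak_neural = t[np.argmax(neural)]        # when the firing happens
peak_flow = t[np.argmax(blood_flow)]      # when the flow signal peaks (later)
print(f"neural burst at {peak_neural:.2f} s, flow peak at {peak_flow:.2f} s")
```

Under these assumptions, the measurable flow signal peaks roughly five seconds after the 50 ms burst that caused it, which is why stimulation is considered the nearer-term application and real-time readout the harder one.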

What this means to you

Right now, this technology is experimental. You are not about to buy a brain helmet at your local electronics store. Still, the direction matters. If noninvasive ultrasound devices can reduce chronic pain or support mental health treatment, more patients may consider therapy without facing brain surgery.

At the same time, devices that analyze brain states introduce new privacy questions. Brain-related data is deeply personal. Regulators, hospitals and companies will need clear rules about how that data is stored, shared and protected. Finally, the link between AI companies and brain interface startups shows how closely digital intelligence and neuroscience are becoming intertwined. That connection could reshape medicine, wellness, and even how we interact with technology.

Kurt’s key takeaways

Brain-computer interfaces used to feel far off and experimental. Now they are a serious focus of global research and investment. China’s push to develop an ultrasound-based brain-computer interface adds momentum to a field already shaped by companies like Neuralink and new ventures backed by OpenAI. Progress is steady but measured. The potential is significant. The technical hurdles are real. What happens next will depend on whether researchers can turn promising lab results into safe, reliable treatments people can actually use.

If sound waves could one day interpret your mental state, who should decide how that information is used? Let us know by writing to us at Cyberguy.com.


Copyright 2026 CyberGuy.com.  All rights reserved.  


Technology

This Windows gaming handheld has a screen that folds in half

Lenovo put a foldable display on a gaming handheld. The Legion Go Fold Concept is a Windows-based handheld with a flexible POLED display, detachable Joy-Con-like controllers, and a folio case to turn the whole thing into a mini laptop.

You can use it as a standard Steam Deck-esque handheld with the display folded down to 7.7 inches and controllers attached at its sides, or you can unfold it for a bigger experience. When unfolded, the controllers can be repositioned to all four sides, allowing you to play with the screen in vertical or horizontal orientations.

In vertical splitscreen mode, you can put your game on one half of the screen and a second window (like your chat or game guide) on the other half. Horizontal fullscreen mode gives your game the full 11.6 inches of real estate in a 16:10 aspect ratio. To go into laptop mode, you remove the controllers and mount the handheld into a folio case with a stand, built-in keyboard, and trackpad. The controllers can be put into a separate grip mount to unify them as one gamepad.

There are a lot of ways you can use this folding handheld, including turning one of its controllers into a vertical mouse like on other Legion Go handhelds, but there’s one thing it doesn’t do: fold down to close and protect its screen. The Go Fold only folds outwards, so don’t expect a Nintendo DS or Game Boy Advance SP-like clamshell that closes for portability. Instead, it’s all about getting bigger than your average gaming handheld and offering more. (Though we’ve tried bigger before.)

The Legion Go Fold has some formidable specs: an Intel Core Ultra 7 258V Lunar Lake processor, 32GB of RAM, 1TB of storage, and a 48Wh battery. The plastic-covered OLED has a resolution of 2435 x 1712 and a 165Hz refresh rate. And there’s even a second, circular touchscreen on the right controller, under the face buttons. It doubles as a touchpad and can serve as a support display, letting you swipe between extracted UI elements from a game (which I wouldn’t expect to be widely supported), a clock, system monitoring stats, or an animated GIF (just for fun).

During my brief in-person demo, I didn’t get to play any graphically intense games — just Balatro, which can practically run on a potato. The screen looked plenty sharp, but like any foldable, there’s a crease down the middle; it’s very visible, though you learn to look past it after a bit. The build of the whole thing felt a little fragile, and detaching and reattaching the controllers was definitely janky. Build quality will hopefully improve if this device ever actually makes it to market.

The laptop mode was a pleasant surprise for me, though. I did not expect a gaming handheld to double as a conventional computer you could get work done on. The Legion Go Fold’s case took quite a bit of fumbling before I set it up correctly, but it probably wouldn’t take long to get used to if you actually lived with it.

Then again, I don’t know if anyone is going to be able to live with this thing — ever. I’d love for the Legion Go Fold to go from concept to real product like other out-there Lenovo ideas, but I shudder to think what it might cost. The Legion Go 2 is already priced well over $1,000. And with the ongoing RAMageddon crisis we’re living through, there’s no telling how much more expensive an actual Legion Go Fold would be if it came out in a year or more.

But even if it’s not the kind of foldable I expected, and even though it may never come out, it’s certainly cool. Now somebody please make a folding PC handheld that goes from kinda-big to really small. I think that’d be the one for me.

Photography by Antonio G. Di Benedetto / The Verge

