

Hollywood’s pivot to AI video has a prompting problem


It has become almost impossible to browse the internet without having an AI-generated video thrust upon you. Open basically any social media platform, and it won’t be long until an uncanny-looking clip of a fake natural disaster or animals doing impossible things slides across your screen. Most of the videos look absolutely terrible. But they’re almost always accompanied by hundreds, if not thousands, of likes and comments from people insisting that AI-generated content is a new art form that’s going to change the world.

That has been especially true of AI clips that are meant to appear realistic. No matter how strange or aesthetically inconsistent the footage may be, there is usually someone proclaiming that it’s something the entertainment industry should be afraid of. The idea that AI-generated video is both the future of filmmaking and an existential threat to Hollywood has caught on like wildfire among boosters for the relatively new technology.

The thought of major studios embracing this technology as is feels dubious when you consider that, oftentimes, AI models’ output simply isn’t the kind of stuff that could be fashioned into a quality movie or series. That’s an impression that filmmaker Bryn Mooser wants to change with Asteria, a new production house he launched last year, as well as a forthcoming AI-generated feature film from Natasha Lyonne (also Mooser’s partner and an advisor at Late Night Labs, a studio focused on generative AI that Mooser’s film and TV company XTR acquired last year).

Asteria’s big selling point is that, unlike most other AI outfits, the generative model it built with research company Moonvalley is “ethical,” meaning it has only been trained on properly licensed material. Especially in the wake of Disney and Universal suing Midjourney for copyright infringement, the concept of ethical generative AI may become an important part of how AI is more widely adopted throughout the entertainment industry. However, during a recent chat, Mooser stresses to me that the company’s clear understanding of what generative AI is and what it isn’t helps set Asteria apart from other players in the AI space.

“As we started to think about building Asteria, it was obvious to us as filmmakers that there were big problems with the way that AI was being presented to Hollywood,” Mooser says. “It was obvious that the tools weren’t being built by anybody who’d ever made a film before. The text-to-video form factor, where you say ‘make me a new Star Wars movie’ and out it comes, is a thing that Silicon Valley thought people wanted and actually believed was possible.”


In Mooser’s view, part of the reason some enthusiasts have been quick to call generative video models a threat to traditional film workflows boils down to people assuming that footage created from prompts can replicate the real thing as effectively as what we’ve seen with imitative, AI-generated music. It has been easy for people to replicate singers’ voices with generative AI and produce passable songs. But Mooser thinks that, in its rush to normalize gen AI, the tech industry conflated audio and visual output in a way that’s at odds with what actually makes for good films.

“You can’t go and say to Christopher Nolan, ‘Use this tool and text your way to The Odyssey,’” Mooser says. “As people in Hollywood got access to these tools, there were a couple things that were really clear — one being that the form factor can’t work because the amount of control that a filmmaker needs comes down to the pixel level in a lot of cases.”

To give its filmmaking partners more of that granular control, Asteria uses its core generative model, Marey, to create new, project-specific models trained on original visual material. This would, for example, allow an artist to build a model that could generate a variety of assets in their distinct style, and then use it to populate a world full of different characters and objects that adhere to a unique aesthetic. That was the workflow Asteria used in its production of musician Cuco’s animated short “A Love Letter to LA.” By training Asteria’s model on 60 original illustrations drawn by artist Paul Flores, the studio could generate new 2D assets and convert them into 3D models used to build the video’s fictional town. The short is impressive, but its heavy stylization speaks to the way projects with generative AI at their core often have to work within the technology’s visual limitations. It doesn’t feel like this workflow offers control down to the pixel level just yet.

Mooser says that, depending on the financial arrangement between Asteria and its clients, filmmakers can retain partial ownership of the models after they’re completed. In addition to the original licensing fees Asteria pays the creators of the material its core model is trained on, the studio is “exploring” the possibility of a revenue sharing system, too. But for now, Mooser is more focused on winning artists over with the promise of lower initial development and production costs.

“If you’re doing a Pixar animated film, you might be coming on as a director or a writer, but it’s not often that you’ll have any ownership of what you’re making, residuals, or cut of what the studio makes when they sell a lunchbox,” Mooser tells me. “But if you can use this technology to bring the cost down and make it independently financeable, then you have a world where you can have a new financing model that makes real ownership possible.”


Asteria plans to test many of Mooser’s beliefs in generative AI’s transformative potential with Uncanny Valley, a feature film to be co-written and directed by Lyonne. The live-action film centers on a teenage girl whose shaky perception of reality causes her to start seeing the world as being more video game-like. Many of Uncanny Valley’s fantastical, Matrix-like visual elements will be created with Asteria’s in-house models. That detail in particular makes Uncanny Valley sound like a project designed to present the hallucinatory inconsistencies that generative AI has become known for as clever aesthetic features rather than bugs. But Mooser tells me that he hopes “nobody ever thinks about the AI part of it at all” because “everything is going to have the director’s human touch on it.”

“It’s not like you’re just texting, ‘then they go into a video game,’ and watch what happens, because nobody wants to see that,” Mooser says. “That was very clear as we were thinking about this. I don’t think anybody wants to just see what computers dream up.”

Like many generative AI advocates, Mooser sees the technology as a “democratizing” tool that can make the creation of art more accessible. He also stresses that, under the right circumstances, generative AI could make it easier to produce a movie for around $10–20 million rather than $150 million. Still, securing that kind of capital is a challenge for most younger, up-and-coming filmmakers.

One of Asteria’s big selling points that Mooser repeatedly mentions to me is generative AI’s potential to produce finished works faster and with smaller teams. He frames that aspect of an AI production workflow as a positive that would allow writers and directors to work more closely with key collaborators like art and VFX supervisors without needing to spend so much time going back and forth on revisions — something that tends to happen more when a project has a lot of people working on it. But, by definition, smaller teams translate to fewer jobs, which raises the issue of AI’s potential to put people out of work. When I bring this up with Mooser, he points to the recent closure of VFX house Technicolor Group as an example of the entertainment industry’s ongoing upheaval, which began leaving workers unemployed before the generative AI hype reached its current fever pitch.

Mooser is careful not to downplay the fact that these concerns about generative AI were a big part of what plunged Hollywood into a double strike back in 2023. But he is resolute in his belief that many of the industry’s workers will be able to pivot laterally into new careers built around generative AI if they are open to embracing the technology.


“There are filmmakers and VFX artists who are adaptable and want to lean into this moment the same way people were able to switch from editing on film to editing on Avid,” Mooser says. “People who are real technicians — art directors, cinematographers, writers, directors, and actors — have an opportunity with this technology. What’s really important is that we as an industry know what’s good about this and what’s bad about this, what is helpful for us in trying to tell our stories, and what is actually going to be dangerous.”

What seems rather dangerous about Hollywood’s interest in generative AI isn’t the “death” of the larger studio system, but rather the technology’s potential to make it easier for studios to work with fewer actual people. That’s literally one of Asteria’s big selling points, and if its workflows became the industry norm, it is hard to imagine them scaling in a way that could accommodate today’s entertainment workforce transitioning into new careers. As for what’s good about it, Mooser knows the right talking points. Now he has to show that his tech — and all the changes it entails — can work.


Fired Rockstar employees’ plea for interim pay denied


A UK employment tribunal rejected a request from fired Rockstar Games employees to receive interim pay while waiting for a full hearing about their dismissal, according to Bloomberg and IGN. After Rockstar fired 34 employees last year — 31 from the UK and three from Canada — the Independent Workers’ Union of Great Britain (IWGB) accused the company of “union busting.” Rockstar claims that the fired employees were leaking company information in a Discord channel.

The hearing took place over two days last week. “Despite being refused interim relief today, we’ve come out of last week’s hearing more confident than ever that a full and substantive tribunal will find Rockstar’s calculated attempt to crush a union to be not only unjust but unlawful,” IWGB president Alex Marshall says in a statement. “The fact that we were granted this hearing speaks to the strength of our case and, over the course of the two-day hearing, Rockstar consistently failed to back up claims made in the press or to refute that they acted unfairly, maliciously, and in breach of their own procedures.”

“We regret that we were put in a position where dismissals were necessary, but we stand by our course of action as supported by the outcome of this hearing,” a Rockstar Games spokesperson says in statements to Bloomberg and IGN. Take-Two, Rockstar’s parent company, didn’t immediately reply to a request for comment.

Rockstar is working on Grand Theft Auto VI, which was recently delayed from a planned May launch to November 19th.



Why your Android TV box may secretly be a part of a botnet



Android TV streaming boxes that promise “everything for one price” are everywhere right now. 

You’ll see them on big retail sites, in influencer videos, and even recommended by friends who swear they’ve cut the cord for good. And to be fair, they look irresistible on paper, offering thousands of channels for a one-time payment. But security researchers are warning that some of these boxes may come with a hidden cost.

In several cases, devices sold as simple media streamers appear to quietly turn your home internet connection into part of larger networks used for shady online activity. And many buyers have no idea it’s happening.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.



Android TV streaming boxes promising unlimited channels for a one-time fee may quietly turn home internet connections into proxy networks, according to security researchers. (Photo By Paul Chinn/The San Francisco Chronicle via Getty Images)

What’s inside these streaming boxes

According to an investigation by Krebs on Security, some of these devices stop behaving like ordinary media streamers once they’re connected to your network. Researchers closely examined SuperBox, an Android-based streaming box sold through third-party sellers on major retail platforms. On paper, SuperBox markets itself as just hardware. The company claims it doesn’t pre-install pirated apps and insists users are responsible for what they install. That sounds reassuring until you look at how the device actually works.

To unlock the thousands of channels SuperBox advertises, you must first remove Google’s official app ecosystem and replace it with an unofficial app store. That step alone should raise eyebrows. Once those custom apps are installed, the device doesn’t just stream video but also begins routing internet traffic through third-party proxy networks.

What this means is that your home internet connection may be used to relay traffic for other people. That traffic can include ad fraud, credential stuffing attempts and large-scale web scraping.


During testing by Censys, a cyber intelligence company that tracks internet-connected devices, SuperBox models immediately contacted servers tied to Tencent’s QQ messaging service, as well as a residential proxy service called Grass.

Grass describes itself as an opt-in network that lets you earn rewards by sharing unused internet bandwidth. This suggests that SuperBox devices may be using SDKs or tooling that hijack bandwidth without clear user consent, effectively turning the box into a node inside a proxy network.

Why SuperBox activity resembles botnet behavior

In simple terms, a botnet is a large group of compromised devices that work together to route traffic or perform online tasks without the owners realizing it.

Researchers discovered SuperBox devices contained advanced networking and remote access tools that have no business being on a streaming box. These included utilities like Tcpdump and Netcat, which are commonly used for network monitoring and traffic interception.

The devices performed DNS hijacking and ARP poisoning on local networks, techniques used to redirect traffic and impersonate other devices on the same network. Some models even contained directories labeled “secondstage,” suggesting additional payloads or functionality beyond streaming.
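ARP poisoning of the kind described above leaves a telltale fingerprint: two or more IP addresses on the local network suddenly resolving to the same hardware (MAC) address, typically because an attacker’s device is impersonating the router. As a rough illustration — not the researchers’ actual tooling — a short Python sketch can scan the output of a standard `arp -a` dump for that pattern:

```python
import re
from collections import defaultdict

# Hypothetical helper: flag MAC addresses that claim more than one IP in an
# `arp -a` dump — a classic symptom of ARP poisoning on a home network.
ARP_LINE = re.compile(
    r"\((?P<ip>[\d.]+)\) at (?P<mac>(?:[0-9a-f]{1,2}:){5}[0-9a-f]{1,2})", re.I
)

def suspicious_macs(arp_output: str) -> dict:
    """Return {mac: [ips]} for any MAC bound to more than one IP."""
    by_mac = defaultdict(list)
    for match in ARP_LINE.finditer(arp_output):
        mac = match.group("mac").lower()
        if mac != "ff:ff:ff:ff:ff:ff":  # ignore the broadcast entry
            by_mac[mac].append(match.group("ip"))
    return {mac: ips for mac, ips in by_mac.items() if len(ips) > 1}

# Example: the router (192.168.1.1) and another host share a MAC — a red flag.
dump = """\
router (192.168.1.1) at aa:bb:cc:dd:ee:01 [ether] on en0
? (192.168.1.50) at aa:bb:cc:dd:ee:01 [ether] on en0
laptop (192.168.1.20) at aa:bb:cc:dd:ee:02 [ether] on en0
"""
print(suspicious_macs(dump))  # {'aa:bb:cc:dd:ee:01': ['192.168.1.1', '192.168.1.50']}
```

A shared MAC isn’t proof of an attack (some routers legitimately answer for multiple addresses), but it’s exactly the kind of anomaly worth investigating on a network with an untrusted streaming box attached.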


SuperBox is just one brand in a crowded market of no-name Android streaming devices. Many of them promise free content and quick setup, but often come preloaded with malware or require unofficial app stores that expose users to serious risk.

In July 2025, Google filed a lawsuit against operators behind what it called the BADBOX 2.0 botnet, a network of more than ten million compromised Android devices. These devices were used for advertising fraud and proxy services, and many were infected before consumers even bought them.

Around the same time, federal authorities warned that compromised streaming and IoT devices were being used to gain unauthorized access to home networks and funnel traffic into criminal proxy services.

We reached out to SuperBox for comment but did not receive a response before our deadline.

8 steps you can take to protect yourself

If you already own one of these streaming boxes or are thinking about buying one, these steps can help reduce your risk significantly.


1) Avoid devices that require unofficial app stores

If a streaming box asks you to remove Google Play or install apps from an unknown marketplace, stop right there. This bypasses Android’s built-in security checks and opens the door to malicious software. Legitimate Android TV devices don’t require this.

2) Use strong antivirus software on your devices

Even if the box itself is compromised, strong antivirus software on your computers and phones can detect suspicious network behavior, malicious connections or follow-on attacks like credential stuffing. Strong antivirus software monitors behavior, not just files, which matters when malware operates quietly in the background. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

3) Put streaming devices on a separate or guest network

If your router supports it, isolate smart TVs and streaming boxes from your main network. This prevents a compromised device from seeing your laptops, phones or work systems. It’s one of the simplest ways to limit damage if something goes wrong.

4) Use a password manager

If your internet connection is being abused, stolen credentials often come next. A password manager ensures every account uses a unique password, so one leak doesn’t unlock everything. Many password managers also refuse to autofill on suspicious or fake websites, which can alert you before you make a mistake.



Investigators warn some Android-based streaming boxes route user bandwidth through third-party servers linked to ad fraud and cybercrime. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)

Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

5) Consider using a VPN for sensitive activity

A VPN won’t magically fix a compromised device, but it can reduce exposure by encrypting your traffic when browsing, banking or working online. This makes it harder for third parties to inspect or misuse your data if your network is being relayed.


For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices at Cyberguy.com.

6) Watch your internet usage and router activity

Unexpected spikes in bandwidth, slower speeds or strange outbound connections can be warning signs. Many routers show connected devices and traffic patterns.

If you notice suspicious traffic or behavior, unplug the streaming box immediately and perform a factory reset on your router. In some cases, the safest option is to stop using the device altogether.

Also, make sure your router firmware is up to date and that you’ve changed the default admin password. Compromised devices often try to exploit weak router settings to persist on a network.
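There is no single number that proves a device is misbehaving, but a simple heuristic — comparing each usage sample against a rolling baseline — captures the idea of “unexpected spikes.” The sketch below is illustrative only; the window size and multiplier are hypothetical, not tuned recommendations:

```python
from statistics import mean

# Hypothetical heuristic: flag periodic upload samples (in MB per interval)
# that jump well above the device's recent baseline. The 3x multiplier and
# 5-sample window are illustrative, not tuned recommendations.
def flag_upload_spikes(samples, window=5, factor=3.0):
    """Return indices of samples exceeding `factor` x the rolling mean."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if baseline > 0 and samples[i] > factor * baseline:
            spikes.append(i)
    return spikes

# A streaming box idling at ~2 MB per interval, then suddenly relaying traffic.
usage = [2.1, 1.8, 2.0, 2.3, 1.9, 2.2, 45.0, 50.3, 2.0]
print(flag_upload_spikes(usage))  # [6, 7]
```

In practice you’d pull these numbers from your router’s per-device traffic page; the point is that a box that idles quietly and then sustains heavy uploads while you aren’t streaming anything deserves a closer look.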

7) Be wary of “free everything” streaming promises

Unlimited premium channels for a one-time fee usually mean you’re paying in some other way, often with your data, bandwidth or legal exposure. If a deal sounds too good to be true, it usually is.


8) Consider a data removal service

If your internet connection or accounts have been abused, your personal details may already be circulating among data brokers. A data removal service can help opt you out of people-search sites and reduce the amount of personal information criminals can exploit for follow-up scams or identity theft. While it won’t fix a compromised device, it can limit long-term exposure.


Cyber experts say certain low-cost streaming devices behave more like botnet nodes than legitimate media players once connected to home networks. (Photo by Alessandro Di Ciommo/NurPhoto via Getty Images)

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.



Kurt’s key takeaway

Streaming boxes like SuperBox thrive on frustration. As subscriptions pile up, people look for shortcuts. But when a device promises everything for nothing, it’s worth asking what it’s really doing behind the scenes. Research shows that some of these boxes don’t just stream TV. They quietly turn your home network into a resource for others, sometimes for criminal activity. Cutting the cord shouldn’t mean giving up control of your internet connection. Before plugging in that “too good to be true” box, it’s worth slowing down and looking a little closer.

Would you still use a streaming box if it meant sharing your internet with strangers? Let us know by writing to us at Cyberguy.com.



Copyright 2026 CyberGuy.com. All rights reserved.




Anthropic wants you to use Claude to ‘Cowork’ in latest AI agent push


Anthropic wants to expand Claude’s AI agent capabilities and take advantage of the growing hype around Claude Code — and it’s doing it with a brand-new feature released Monday, dubbed “Claude Cowork.”

“Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks,” Anthropic wrote in a blog post. The company is releasing it as a “research preview” so the team can learn more about how people use it and continue building accordingly. So far, Cowork is only available via Claude’s macOS app, and only for subscribers of Anthropic’s power-user tier, Claude Max, which costs $100 to $200 per month depending on usage.

Here’s how Claude Cowork works: A user gives Claude access to a folder on their computer, allowing the chatbot to read, edit, or create files. (Examples Anthropic gave included the ability to “re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.”) Claude will provide regular updates on what it’s working on, and users can also use existing connectors to link it to external info (like Asana, Notion, PayPal, and other supported partners) or link it to Claude in Chrome for browser-related tasks.

“You don’t need to keep manually providing context or converting Claude’s outputs into the right format,” Anthropic wrote. “Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel. It feels much less like a back-and-forth and much more like leaving messages for a coworker.”

The new feature is part of Anthropic’s (and its competitors’) bid to provide the most actually useful AI agents, both for consumers and enterprise. AI agents have come a long way from their humble beginnings as mostly-theoretically-useful tools, but there’s still much more development needed before you’ll see your non-tech-industry friends using them to complete everyday tasks.


Anthropic’s “Skills for Claude,” announced in October, was a partial precursor to Cowork. Skills let Claude improve at personalized tasks and jobs by way of “folders that include instructions, scripts, and resources that Claude can load when needed to make it smarter at specific work tasks — from working with Excel [to] following your organization’s brand guidelines,” per a release at the time. People could also build their own Skills tailored to their specific jobs and the tasks they needed completed.

As part of the announcement, Anthropic warned about the potential dangers of using Cowork and other AI agent tools: if instructions aren’t clear, Claude has the ability to delete local files and take other “potentially destructive actions,” and prompt injection attacks raise a range of further safety concerns. Prompt injection attacks often involve bad actors hiding malicious text in a website that the model is referencing, which instructs the model to bypass its safeguards and do something harmful, such as hand over personal data. “Agent safety — that is, the task of securing Claude’s real-world actions — is still an active area of development in the industry,” Anthropic wrote.

Claude Max subscribers can try out the new feature by clicking on “Cowork” in the sidebar of the macOS app. Other users can join the waitlist.

