Technology

Hollywood’s pivot to AI video has a prompting problem


It has become almost impossible to browse the internet without having an AI-generated video thrust upon you. Open basically any social media platform, and it won’t be long until an uncanny-looking clip of a fake natural disaster or animals doing impossible things slides across your screen. Most of the videos look absolutely terrible. But they’re almost always accompanied by hundreds, if not thousands, of likes and comments from people insisting that AI-generated content is a new art form that’s going to change the world.

That has been especially true of AI clips that are meant to appear realistic. No matter how strange or aesthetically inconsistent the footage may be, there is usually someone proclaiming that it’s something the entertainment industry should be afraid of. The idea that AI-generated video is both the future of filmmaking and an existential threat to Hollywood has caught on like wildfire among boosters for the relatively new technology.

The thought of major studios embracing this technology as-is feels dubious when you consider that, oftentimes, AI models’ output simply isn’t the kind of stuff that could be fashioned into a quality movie or series. That’s an impression that filmmaker Bryn Mooser wants to change with Asteria, a new production house he launched last year, as well as a forthcoming AI-generated feature film from Natasha Lyonne (also Mooser’s partner and an advisor at Late Night Labs, a studio focused on generative AI that Mooser’s film and TV company XTR acquired last year).

Asteria’s big selling point is that, unlike most other AI outfits, the generative model it built with research company Moonvalley is “ethical,” meaning it has only been trained on properly licensed material. Especially in the wake of Disney and Universal suing Midjourney for copyright infringement, the concept of ethical generative AI may become an important part of how AI is more widely adopted throughout the entertainment industry. However, during a recent chat, Mooser stresses to me that the company’s clear understanding of what generative AI is and what it isn’t helps set Asteria apart from other players in the AI space.

“As we started to think about building Asteria, it was obvious to us as filmmakers that there were big problems with the way that AI was being presented to Hollywood,” Mooser says. “It was obvious that the tools weren’t being built by anybody who’d ever made a film before. The text-to-video form factor, where you say ‘make me a new Star Wars movie’ and out it comes, is a thing that Silicon Valley thought people wanted and actually believed was possible.”


In Mooser’s view, part of the reason some enthusiasts have been quick to call generative video models a threat to traditional film workflows boils down to people assuming that footage created from prompts can replicate the real thing as effectively as what we’ve seen with imitative, AI-generated music. It has been easy for people to replicate singers’ voices with generative AI and produce passable songs. But Mooser thinks that, in its rush to normalize gen AI, the tech industry conflated audio and visual output in a way that’s at odds with what actually makes for good films.

“You can’t go and say to Christopher Nolan, ‘Use this tool and text your way to The Odyssey,’” Mooser says. “As people in Hollywood got access to these tools, there were a couple things that were really clear — one being that the form factor can’t work because the amount of control that a filmmaker needs comes down to the pixel level in a lot of cases.”

To give its filmmaking partners more of that granular control, Asteria uses its core generative model, Marey, to create new, project-specific models trained on original visual material. This would, for example, allow an artist to build a model that could generate a variety of assets in their distinct style, and then use it to populate a world full of different characters and objects that adhere to a unique aesthetic. That was the workflow Asteria used in its production of musician Cuco’s animated short “A Love Letter to LA.” By training Asteria’s model on 60 original illustrations drawn by artist Paul Flores, the studio could generate new 2D assets and convert them into 3D models used to build the video’s fictional town. The short is impressive, but its heavy stylization speaks to the way projects with generative AI at their core often have to work within the technology’s visual limitations. It doesn’t feel like this workflow offers control down to the pixel level just yet.

Mooser says that, depending on the financial arrangement between Asteria and its clients, filmmakers can retain partial ownership of the models after they’re completed. In addition to the original licensing fees Asteria pays the creators of the material its core model is trained on, the studio is “exploring” the possibility of a revenue sharing system, too. But for now, Mooser is more focused on winning artists over with the promise of lower initial development and production costs.

“If you’re doing a Pixar animated film, you might be coming on as a director or a writer, but it’s not often that you’ll have any ownership of what you’re making, residuals, or cut of what the studio makes when they sell a lunchbox,” Mooser tells me. “But if you can use this technology to bring the cost down and make it independently financeable, then you have a world where you can have a new financing model that makes real ownership possible.”


Asteria plans to test many of Mooser’s beliefs in generative AI’s transformative potential with Uncanny Valley, a feature film to be co-written and directed by Lyonne. The live-action film centers on a teenage girl whose shaky perception of reality causes her to start seeing the world as increasingly video game-like. Many of Uncanny Valley’s fantastical, Matrix-like visual elements will be created with Asteria’s in-house models. That detail in particular makes Uncanny Valley sound like a project designed to present the hallucinatory inconsistencies that generative AI has become known for as clever aesthetic features rather than bugs. But Mooser tells me that he hopes “nobody ever thinks about the AI part of it at all” because “everything is going to have the director’s human touch on it.”

“It’s not like you’re just texting, ‘then they go into a video game,’ and watch what happens, because nobody wants to see that,” Mooser says. “That was very clear as we were thinking about this. I don’t think anybody wants to just see what computers dream up.”

Like many generative AI advocates, Mooser sees the technology as a “democratizing” tool that can make the creation of art more accessible. He also stresses that, under the right circumstances, generative AI could make it easier to produce a movie for around $10–20 million rather than $150 million. Still, even securing that kind of capital is a challenge for most younger, up-and-coming filmmakers.

One of Asteria’s big selling points, which Mooser repeatedly mentions to me, is generative AI’s potential to produce finished works faster and with smaller teams. He frames that aspect of an AI production workflow as a positive that would let writers and directors work more closely with key collaborators like art and VFX supervisors, without spending so much time going back and forth on revisions — the kind of churn that becomes more likely when a project has a lot of people working on it. But, by definition, smaller teams mean fewer jobs, which raises the issue of AI’s potential to put people out of work. When I bring this up with Mooser, he points to the recent closure of VFX house Technicolor Group as an example of the entertainment industry’s ongoing upheaval, one that began leaving workers unemployed before the generative AI hype reached its current fever pitch.

Mooser is careful not to downplay the fact that these concerns about generative AI were a big part of what plunged Hollywood into a double strike back in 2023. But he is resolute in his belief that many of the industry’s workers will be able to pivot laterally into new careers built around generative AI if they are open to embracing the technology.


“There are filmmakers and VFX artists who are adaptable and want to lean into this moment the same way people were able to switch from editing on film to editing on Avid,” Mooser says. “People who are real technicians — art directors, cinematographers, writers, directors, and actors — have an opportunity with this technology. What’s really important is that we as an industry know what’s good about this and what’s bad about this, what is helpful for us in trying to tell our stories, and what is actually going to be dangerous.”

What seems rather dangerous about Hollywood’s interest in generative AI isn’t the “death” of the larger studio system so much as the technology’s potential to make it easier for studios to work with fewer actual people. That’s literally one of Asteria’s big selling points, and if its workflows became the industry norm, it is hard to imagine them scaling in a way that could accommodate today’s entertainment workforce as it transitions into new careers. As for what’s good about it, Mooser knows the right talking points. Now he has to show that his tech — and all the changes it entails — can work.


Technology

Cyberpunk Edgerunners 2 will be even sadder and bloodier


The new season will be directed by Kai Ikarashi, who also directed episode six of the first season, “Girl on Fire.” There’s no word yet on when Cyberpunk: Edgerunners 2 will premiere, but the studio did show off new poster artwork. A trailer will be shown later tonight, at 8:30PM PT, during a panel for the animation studio Trigger.

Showrunner and writer Bartosz Sztybor said during Friday’s panel that for season one, “I just wanted to make the whole world sad… when people are sad, I’m a bit happy,” and that this new 10-episode season will be “…of course, sadder, but it will be also darker, more bloody, and more raw.”

A brief summary of the follow-up series tells fans what to expect following the end of David’s story in season one:

Cyberpunk: Edgerunners 2 presents a new standalone 10-episode story from the world of Cyberpunk 2077 — a raw chronicle of redemption and revenge. In a city that thrives in the spotlight of violence, one question remains: when the world is blinded by spectacle, what extremes do you have to go to make your story matter?


Technology

How Google’s ‘Ask Photos’ uses AI to find the pictures you want



Google Photos has always been a handy way to store and organize your pictures, but its latest feature, Ask Photos, is taking things to a whole new level. 

Powered by Google’s Gemini AI, Ask Photos lets you search your photo library using natural language. Let’s take a look at what makes Google Photos AI search so different, what’s improved and how it could change the way you interact with your memories.




Google Photos’ “Ask Photos” with Gemini (Google)

What is Google Photos’ AI search?

Ask Photos is Google’s new AI-powered search tool inside Google Photos. Instead of typing simple keywords or scrolling endlessly, you can now ask complex questions, such as “Show me the best photo from each national park I’ve visited” or “What did I eat on my trip to Italy?” The AI understands context, dates, locations and even themes, making it easier to find exactly what you’re looking for.


How does Ask Photos work?

Ask Photos uses the Gemini AI model, designed specifically for understanding the content and context of your images. When you ask a question, Gemini analyzes your photos, looking at things like location, people and even the quality of each shot. For example, if you ask for the best birthday party photos, it can identify party themes and highlight your favorite moments.

You can use Ask Photos for both simple and complex searches:

  • Simple: “Show me pictures of my dog.”
  • Complex: “Find all the photos from 2025 when I had short hair.”
  • Contextual: “Remind me what themes we’ve had for Lena’s birthday parties?”

What’s new and improved?

After pausing the rollout earlier this year to address speed and quality issues, Google resumed and expanded Ask Photos to more users in the U.S. Now, Ask Photos displays classic search results alongside Gemini AI results on a single page, streamlining your search experience. Simple searches like “cats” or “nature” deliver instant results, while complex queries return faster and more accurate answers. If you prefer classic search, you can opt out of Ask Photos at any time by visiting your app settings and toggling off the “Search with Ask Photos” feature. This flexibility lets you search the way you want.

Availability and privacy

Ask Photos is rolling out to more eligible users in the U.S., beyond early access testers. Requirements include being 18 or older, using English (U.S.) as your account language and enabling Face Groups. Google says your private photos are not used for advertising, and only specific queries may be reviewed to improve the service. Your answers stay private unless you contact support.


Kurt’s key takeaways

Google Photos AI search is making it easier than ever to find specific memories, whether you’re looking for a single photo or trying to remember the details of a special event. With natural language search and the power of Gemini AI, Ask Photos could become the smartest way to browse your photo library.


How comfortable are you with AI analyzing your personal photos, and where do you draw the line between convenience and privacy? Let us know by writing to us at Cyberguy.com/Contact



Technology

Meet Soham Parekh, the engineer burning through tech by working at three to four startups simultaneously


One name is popping up a lot across tech startup social media right now, and you might’ve heard it: Soham Parekh. On X, people are joking that Parekh is single-handedly holding up all modern digital infrastructure, while others are posting memes about him working in front of a dozen different monitors or filling in for the thousands of people that Microsoft just laid off.

From what social media posts suggest, Parekh is actually a software engineer who seems to have interviewed at dozens of tech startups over the years, while also juggling multiple jobs at the same time. Several startups had this revelation on July 2nd, when Suhail Doshi, founder of the AI design tool Playground, posted a PSA on X, saying:

PSA: there’s a guy named Soham Parekh (in India) who works at 3-4 startups at the same time. He’s been preying on YC companies and more. Beware.

I fired this guy in his first week and told him to stop lying / scamming people. He hasn’t stopped a year later. No more excuses.

Doshi’s post was quickly flooded with replies that included similar stories. “We interviewed this guy too, but caught this during references checks,” Variant founder Ben South said. “Turns out he had 5-6 profiles each with 5+ places he actually worked at.” When asked what tipped him off about Parekh, South told The Verge that his suspicions arose during Parekh’s interview, prompting his team to do a reference check earlier than they usually would. “That’s when we learned he was working multiple jobs,” South said.

Parekh’s resume and pitch email look good at first glance, which helps him garner interest from multiple companies. “He had a prolific GitHub contribution graph and prior startup experience,” Marcus Lowe, founder of the AI app builder Create, told The Verge. “He was also extremely technically strong during our interview process.”


Just one day after this all unfolded, Parekh came forward in an interview with the daily tech show TBPN. Parekh confirmed what many tech startup founders had suspected: he had been working for multiple companies at the same time. “I’m not proud of what I’ve done. That’s not something I endorse either. But no one really likes to work 140 hours a week, I had to do it out of necessity,” Parekh said. “I was in extremely dire financial circumstances.”

Parekh seems to have made a good first impression on many people. Digger CEO Igor Zalutski said his company “nearly hired him,” as he “seemed so sharp” during interviews, while AIVideo.com cofounder Justin Harvey similarly said that he was “THIS close to hiring him,” adding that “he actually crushed the interview.” Vapi cofounder Jordan Dearsley said Parekh “was the best technical interview” he’s seen, but he “did not deliver on his projects.”

The startups that did hire Parekh didn’t seem to keep him around for long. Lowe said that he noticed something was off when Parekh kept making excuses to push back his start date. After telling Lowe that he had to delay working because he had a trip planned to see his sister in New York, Parekh later claimed that he couldn’t start working following the trip because he was sick. “For whatever reason, something just felt off,” Lowe said.

That’s when Lowe visited Parekh’s GitHub profile and realized he was committing code to a private repository during the time he was supposed to be sick. Lowe also found recent commits to another San Francisco-based startup. “Did some digging, noticed that he was in some of their marketing materials,” Lowe said. “I was like, ‘Huh, but he didn’t declare this on his resume. This feels weird.’” Create ended up letting Parekh go after he failed to complete an assignment.

It looks like Parekh even had a stint at Meta. In 2021, the company published a post highlighting his story as a contributor working on mixed-reality experiences in WebXR. In the post, Parekh said that he found “that the best way to get better at software development is to not only practice it but to use it to solve real world problems.” Meta didn’t immediately respond to The Verge’s request for comment.


Parekh’s purported scheme may have been uncovered, but his outlook might not be all bad — if you believe him. Parekh claims he landed a job at Darwin, an AI video remixing startup. “Earlier today, I signed an exclusive founding deal to be founding engineer at one company and one company only,” Parekh posted on X. “They were the only ones willing to bet on me at this time.”
