This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the intersection of entertainment and technology, follow Charles Pulliam-Moore. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

In just a few short years, text-to-image models went from only being able to churn out smudgy, “dreamlike” approximations of their input to producing detailed visualizations of whatever you describe. As the AI image generators got better, text-to-video models like Runway’s Gen series, Meta’s Make-A-Video, and Google’s Veo also came into their own. Now, some of Hollywood’s biggest studios have signaled that they are ready to start riding the gen AI wave.

Even with all of the obvious concerns about copyright infringement and job displacement that generative AI presents, a steady chorus of voices has been insisting that this technology is going to be the future of filmmaking. A lot of gen AI supporters see it as a tool that’s “democratizing” art by lowering traditional barriers to entry like “learning how to draw,” “learning how to play an instrument,” or “learning how to write a story.”

And even though much of what we’ve seen out of the AI-generated video space hasn’t been especially good, more and more entertainment studios seem to be betting that this technology will pay off (for them, especially) so long as everyone commits to it and ignores the potential harms that come along with it.

The use of AI in TV and film production isn’t exactly a new thing, but studios have largely been loath to talk openly about it. Industrial Light & Magic used an AI tool to help de-age actors in Martin Scorsese’s The Irishman (2019), and Marvel Studios’ Shang-Chi and the Legend of the Ten Rings (2021) used machine learning to superimpose actors’ faces onto their stunt doubles. Machine learning was also used in the production of Thor: Love and Thunder (2022), and Marvel opted to use Vanity AI to tweak the appearances of actors in Ant-Man and the Wasp: Quantumania (2023).

The general public might not have been clued into the increasingly widespread use of AI in Hollywood. But many actors and writers, concerned about how the technology might impact their careers, were. That’s part of what led to 2023’s dual entertainment strikes, which began just weeks before Disney / Marvel made it very clear through Secret Invasion that they were down to clown with (very ugly) generative AI. And while the strikes came to an end with a modicum of protections against AI put in place, studios continued to use it while making films like Indiana Jones and the Dial of Destiny, Dune: Part Two, Late Night With the Devil, Alien: Romulus, and The Brutalist — for which Adrien Brody won a Best Actor Oscar.

There are still a number of glaring limitations that make gen AI feel undercooked and ill-suited for robust video production workflows. Most models can only create a few seconds of footage that tends to be inconsistent in its visual details, and they do not offer much in the way of fine-grained control over their output. But that is not stopping Silicon Valley heavyweights and a number of AI startups from trying to deeply entrench themselves in the entertainment industry.

Over the past few months, major players in the gen AI space, including OpenAI, Google, and Meta, have been meeting with film studios in hopes of establishing close working relationships. Lionsgate, for example, signed a deal with Runway to produce an in-house generative AI model trained on the studio’s portfolio of films. In late July, Amazon invested in Showrunner, a company that bills itself as the “Netflix of AI” and specializes in clunky, user-created animation generated with text prompts. And earlier this month, OpenAI announced its plans to produce a feature-length movie called Critterz that is meant to convince studios that they can and should produce projects entirely with gen AI.

There has also been a sharp uptick in partnerships / collaborations between established filmmakers — David Goyer, Darren Aronofsky, and James Cameron immediately come to mind — and outfits that are pitching AI-centric workflows as a solution to one of the industry’s larger ongoing problems: ballooning budgets that studios are struggling to recoup against a generally depressed box office.

Pretty much across the board, production has been down in Hollywood since 2024, and studios like Disney, Paramount, and Warner Bros. Discovery that were once willing to pour money into their streaming services have since shifted to reining in their spending. All of this has made it harder for new productions to secure greenlights, and there’s no real consensus about how the industry could get itself into a more stable position. But studios like Asteria are trying to make the case that, by using gen AI, they can bring production costs down so much that filmmakers can get projects off the ground on their own.

We have yet to see anything beyond concept art for Critterz, and we don’t know what an Asteria film filled with AI-generated assets might look like on the big screen. But we have seen other studios, like Netflix, openly embrace the technology for production purposes specifically because it’s cost-saving. Deals like Lionsgate and Runway’s haven’t become commonplace just yet, perhaps because that specific collaboration has been plagued with technical issues. This past summer, Lionsgate bragged that its gen AI model could cook up an anime adaptation of one of its live-action films in just a few hours. But since then, the studio has reportedly found that Runway’s tech simply cannot do that because Lionsgate’s entire portfolio of IP does not give the model a large enough dataset to generate usable output.

But Google’s push to have its name and gen AI tech attached to projects like Ancestra — a middling indie short film that mostly consists of shots that look like machine-produced stock footage — feels like a clear sign of Silicon Valley’s intention to will more of these partnerships into existence.

In order for more of those partnerships to happen, though, AI companies are going to have to figure out what to do about the lawsuits studios like Disney and Universal are slapping them with for copyright infringement. Generative models are only as good as the data they’re trained on, and many of the more popular ones seem to have been built on the assumption that their output would be so impressive that people simply wouldn’t care whether it was created unscrupulously. But megacorporations do care when their IP is stolen. And, more importantly, some filmmakers have been sounding the alarm about how the industry’s early dabblings with gen AI have already “supported the elimination, reduction, or consolidation of jobs” in the industry. It’s hard to imagine this technology becoming part of the Hollywood machine without putting a wide array of artists out of work.

A machine that can crank out an endless stream of concept art for pennies on the dollar might sound like a dream to studio heads. But to the people who once would have been hired to conceptualize and draw those images, gen AI represents a much more existential threat.

  • A lot of gen AI boosters have been quick to say that in addition to making art more “accessible,” this tech will enable studios to produce projects more cheaply. But it’s worth noting that OpenAI’s Critterz project will reportedly cost $30 million to produce. That’s quite a bit more than the $4 million Dream Well Studio spent making Flow using Blender — a free, open-source program — before the movie went on to win an Oscar.
  • Julian Glander’s Boys Go to Jupiter is another great example of how surprisingly powerful a program Blender really is. When I spoke with Glander recently about the movie, he told me that if you really want to see an example of tech democratizing (read: helping people learn how to make) art, the Blender community is a pretty good place to start.
  • You really should check out The Wrap’s report on why Lionsgate and Runway’s gen AI collaboration isn’t working out quite the way the two companies were hoping for. It’s telling that being trained on the studio’s catalog of films and series isn’t enough for the model to produce quality footage that can be used to make new projects. But what’s even more interesting is how that same technological limitation might also apply to models trained on larger portfolios like Disney’s.
  • It’s worth hearing how some of the people behind these gen AI startups talk about this technology because it quickly becomes clear how all of this hype is really just a form of iffy advertising. My recent interviews with Asteria co-founder Bryn Mooser and Fable CEO Edward Saatchi can give you a pretty solid idea of how a lot of these founders are trying to sell products that seem, among other things, pretty half-baked.
  • Before you head to Las Vegas and drop a couple hundred bucks to see the AI-upscaled Wizard of Oz at the Sphere, you should probably read David Ehrlich’s IndieWire piece about how the project is more of a mutilation than a restoration of Victor Fleming’s 1939 classic.