AI startup Runway says its latest video model can generate consistent scenes and people across multiple shots. AI-generated videos often struggle to maintain consistency from one shot to the next, but Runway claims in a post on X that the new model, Gen-4, should give users more “continuity and control” when telling stories.

Currently rolling out to paid and enterprise users, the new Gen-4 video synthesis model lets users generate consistent characters and objects across shots from a single reference image. Users then describe the composition they want, and the model generates consistent output from multiple angles.

As an example, the startup released a video of a woman who maintains her appearance across different shots, settings, and lighting conditions.

The release comes less than a year after Runway announced its Gen-3 Alpha video generator. That model extended the length of videos users could produce, but it also sparked controversy because it had reportedly been trained on thousands of scraped YouTube videos and pirated films.
