Runway, the generative AI startup that co-created last year’s breakout text-to-image model, Stable Diffusion, has released an AI model, called Gen-1, that can transform existing videos into new ones by applying any style specified by a text prompt or reference image.
In a demo reel posted on its website, Runway shows how its software can turn people on a street into claymation puppets and books stacked on a table into a cityscape at night. Runway hopes that Gen-1 will do to video what Stable Diffusion did for images. “We’ve seen a big explosion in image-generation models,” says Runway’s CEO and cofounder Cristóbal Valenzuela. “I truly believe that 2023 is going to be the year of video.”
Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.
In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon.
But the two companies no longer collaborate. With Getty now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—Runway is keen to keep its distance.
Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-A-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But, according to Runway’s demo reel at least, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”
Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.
Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. By putting Gen-1 into the hands of creative professionals, Valenzuela hopes that we will soon start to see generative AI have a similar impact on video.
“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”