Runway released an AI model that can transform existing videos into new ones

How exciting is that!? Runway Research has unveiled a new generative AI model that can transform existing videos into new ones. The model can also apply any style specified by a text prompt or a reference image.

Runway blew up on Instagram with a video demonstrating how Gen-1 can transform video using images. The first-generation software, Gen-1, is currently accessible to a select group of invited users through Runway’s website and is slated to become available to everyone on the waitlist within a few weeks. The software runs in the cloud.

A quick look back shows that Runway has actually been developing AI-powered video-editing software since 2018. Its tools are used by TikTokers and YouTubers as well as major film and television studios. For example, the producers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics, while the visual effects crew for the hit film Everything Everywhere All at Once used the company’s technology to help create key scenes.

Runway also had a hand in today’s wave of popular AI tools: it worked with researchers at the University of Munich to create the first version of Stable Diffusion in 2021. Stability AI, a UK-based startup, subsequently stepped in to cover the computational costs of training the model on far larger amounts of data, and in 2022 it mainstreamed Stable Diffusion, turning it from a research project into a global phenomenon.

“We’ve seen a big explosion in image-generation models,” says Runway CEO and cofounder Cristóbal Valenzuela. “I truly believe that 2023 is going to be the year of video.” He also told MIT Technology Review, “We’re really close to having full feature films being generated,” adding, “We’re close to a place where most of the content you’ll see online will be generated.”

Last month, Runway researchers also published a paper, “Structure and Content-Guided Video Synthesis with Diffusion Models,” which presents a structure- and content-guided video diffusion model that edits videos based on visual or textual descriptions of the desired output.
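Gen-1 itself is not yet publicly available, but the core idea behind such a model, keeping the structure of an input video while re-describing its content with a text prompt, can be roughly sketched with open-source tools. The snippet below is a minimal, hypothetical illustration using Hugging Face’s diffusers library and the publicly released runwayml/stable-diffusion-v1-5 checkpoint, applying text-guided image-to-image diffusion frame by frame. The frame file names, prompt, and strength value are assumptions made for the example; this is not Runway’s actual Gen-1 model.

```python
# Minimal sketch: text-guided, structure-preserving video restyling,
# applied frame by frame with Stable Diffusion img2img.
# NOTE: this is NOT Runway's Gen-1 model; it only illustrates the idea.
# Assumed inputs: frame_000.png ... frame_007.png extracted from a video.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the publicly released Runway Stable Diffusion checkpoint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a claymation scene, stop-motion style"  # desired look of the output

styled_frames = []
for i in range(8):
    frame = Image.open(f"frame_{i:03d}.png").convert("RGB").resize((512, 512))
    # strength < 1 keeps much of the original frame's structure;
    # a fixed seed per frame reduces (but does not remove) flicker.
    generator = torch.Generator("cuda").manual_seed(42)
    out = pipe(
        prompt=prompt,
        image=frame,
        strength=0.5,        # how far to move away from the input frame
        guidance_scale=7.5,  # how strongly to follow the text prompt
        generator=generator,
    ).images[0]
    styled_frames.append(out)

# Save the restyled frames; reassemble into a clip with any video tool.
for i, f in enumerate(styled_frames):
    f.save(f"styled_{i:03d}.png")
```

A naive per-frame approach like this tends to flicker, because each frame is edited independently; purpose-built video diffusion models such as the one described in Runway’s paper add temporal components so the edit stays consistent across frames.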
