
5: Getting Started with RunwayML
Now that I have access to Runway's unlimited plan and a set of clips ready for testing, it's time to start experimenting. This article covers how I'm setting up the project and the choices I'm making in terms of models and settings.
Runway's Gen-3 Models
Runway offers two versions of their Gen-3 model: Gen-3 Alpha and Gen-3 Alpha Turbo. The Alpha Turbo model is less resource-intensive, resulting in faster generation times and lower credit costs. However, it may produce slightly less detailed results compared to the standard Alpha model.
In some quick testing, I found that the results from Gen-3 Alpha stayed much closer to the original material.
When testing Gen-3 Alpha Turbo, the results tended to be highly stylized, and the character's features were heavily modified.
Preparing the Footage
I'm focusing on two specific clips for this initial testing phase:
- Clip 1: Daytime shot with natural lighting.
- Clip 2: Nighttime shot with artificial lighting.
Each clip contains three shots of 5 seconds each, aligning with Runway's recommendations for optimized credit usage and processing time. By using these two contrasting scenes, I aim to evaluate how lighting conditions affect the generative AI outputs.
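Cutting the footage into 5-second shots is easy to script. Below is a minimal sketch that builds one ffmpeg command per segment; the filenames and segment count are placeholders for illustration, not my actual project files.

```python
# Sketch: build ffmpeg commands that cut a source clip into 5-second segments.
# Filenames and segment count are illustrative placeholders.

def segment_commands(src, n_segments=3, seg_len=5):
    """Return one ffmpeg command (as an argument list) per segment of `src`."""
    cmds = []
    for i in range(n_segments):
        start = i * seg_len
        out = f"{src.rsplit('.', 1)[0]}_shot{i + 1}.mp4"
        cmds.append([
            "ffmpeg",
            "-ss", str(start),   # seek to the segment's start time
            "-i", src,
            "-t", str(seg_len),  # keep exactly 5 seconds
            "-c", "copy",        # stream copy: fast, no re-encoding
            out,
        ])
    return cmds

for cmd in segment_commands("clip1_daytime.mp4"):
    print(" ".join(cmd))
```

Using `-c copy` avoids re-encoding, which keeps the source quality intact before the clips ever reach Runway.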
Crafting the Prompt
The prompt plays a significant role in guiding the AI to produce the desired style. After some experimentation, I settled on the following prompt to maintain consistency across all tests:
"Rotoscope-style animation, capturing intricate facial expressions, vibrant, true-to-reference colors. Detailed heavy ink outlines and dynamic cel shading create depth, retaining a bold structure throughout."
This prompt aims to replicate a rotoscope animation effect, emphasizing detailed expressions and vibrant colors, with heavy ink outlines and cel shading to add depth and structure.
Understanding Seeds and Structure Transformation
Seeds: In generative models, a seed is a starting point for the random number generator that influences the output. By using a fixed seed, you can reproduce consistent results across different runs. I found that seed 2929140811 yielded pleasing results with the first clip, so I will use it consistently in all subsequent tests to maintain uniformity.
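The same principle can be demonstrated with any seeded pseudo-random generator. The sketch below uses Python's standard `random` module purely to illustrate determinism; it is not Runway's internal generator.

```python
import random

def sample_outputs(seed, n=3):
    """Draw n pseudo-random values from a generator seeded deterministically."""
    rng = random.Random(seed)  # local generator; seeding fixes the sequence
    return [rng.random() for _ in range(n)]

# The same seed reproduces the exact same sequence on every run.
assert sample_outputs(2929140811) == sample_outputs(2929140811)

# A different seed produces a different sequence.
assert sample_outputs(2929140811) != sample_outputs(12345)
```

This is why locking the seed across tests isolates the effect of the settings being varied: any difference in output comes from the settings, not from random variation.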
Structure Transformation: This setting controls how much the output adheres to the structure of the input video. It ranges from 0 to 10. I ran a quick test with a generic 3D prompt to see the difference in output:
- 0: The output closely follows the input's structure, maintaining shapes and motion.
- 5: A balance between preserving structure and allowing stylistic changes. Although quite interesting, it is not ideal for my use case.
- 10: The output becomes more abstract, diverging significantly from the original structure.
To allow for some experimentation while preserving as much detail as possible and striving for consistency, I will generate content using values of 0, 1, and 2 for this setting.
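Fixing the seed and prompt while sweeping only the structure transformation value gives a small test matrix. The sketch below builds that matrix; the dictionary keys and clip names are illustrative, not Runway's actual API fields.

```python
# Hypothetical test matrix: fixed seed and prompt, varying structure
# transformation. Field names and clip names are illustrative only.

PROMPT = (
    "Rotoscope-style animation, capturing intricate facial expressions, "
    "vibrant, true-to-reference colors. Detailed heavy ink outlines and "
    "dynamic cel shading create depth, retaining a bold structure throughout."
)

def build_runs(clips, structure_values=(0, 1, 2), seed=2929140811):
    """Return one generation spec per (clip, structure value) pair."""
    return [
        {
            "clip": clip,
            "prompt": PROMPT,
            "seed": seed,  # held constant across all runs
            "structure_transformation": s,
        }
        for clip in clips
        for s in structure_values
    ]

runs = build_runs(["clip1_daytime.mp4", "clip2_nighttime.mp4"])
print(len(runs))  # 2 clips x 3 settings = 6 generations
```

Keeping everything constant except one variable per run makes it straightforward to attribute any visual differences to the structure transformation setting alone.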
Initial Observations and Next Steps
So far, the preliminary tests have provided valuable insights into how Runway's settings and my preprocessing choices affect the final output. In the next article, I will share specific examples to illustrate the differences caused by varying the structure transformation settings and using different versions of the footage.
I will also discuss any challenges encountered during the testing process, such as processing times, system demands, and the quality of the outputs in different lighting conditions.
Continue to the next article:
6: Analyzing AI Generated Results