
7: Final Tests and Defining a Production Pipeline
With all the previous experimentation and insights gathered, I ran a few last tests. This article shares the results of these final tests and outlines the production pipeline I'll be using for my short film.
Further Iterations with New Footage
I experimented with additional footage, tweaking the prompt slightly to see if I could achieve even better results. The focus was on extreme close-ups and varying expressions to test the AI's ability to capture intricate details.
Observations:
- Enhanced Details: The AI captured facial expressions and subtle movements more effectively in close-up shots.
- Background Separation: Keeping the background separate reaffirmed the benefits observed in previous tests.
- Hair: As with VFX in general, hair can become an issue during masking and generation; I should avoid long hair unless the story requires it.
Testing Background Generation
I also experimented with generating backgrounds separately to see how they could be integrated with the rotoscoped characters.
Inpainting the Character Out
Using RunwayML's inpainting feature, I attempted to remove the character from the original footage to create clean background plates.
The inpainting wasn't perfect: it left a few artifacts that the AI tried to interpret, resulting in unintended elements in the background. This could be down to specific elements in this example, or it may be a problem that a more careful inpainting pass would mitigate.
Examples of AI-Generated Background Anomalies
Conclusion on Backgrounds:
- Need for Cleaner Methods: Background preparation calls for a more precise approach. Advanced inpainting tools such as Sapphire (which I have yet to explore) might yield better results.
- Alternative Solutions: Considering non-AI methods, such as generating backgrounds using 3D tools like Blender, could offer more control and consistency.
Defining the Production Pipeline
Based on all the testing and insights, here's the proposed pipeline that aligns with my project's creative and budgetary needs.
1. Live-Action Production
- a. Green Screen Filming: Shoot as much of the short film as possible on a green screen, especially close-up shots, to facilitate background separation.
- b. Lighting: Lighting should prioritize full visibility of the actors, avoiding hard shadows and low-key setups.
- c. Depth of Field: We should also aim for as wide a depth of field as possible to keep as much foreground and background in focus. Creative choices can be made after the AI pass.
- d. On-Location Filming: Capture wider shots on location and record additional footage of the environments without actors for background generation.
2. Preprocessing of Footage
Once the footage is edited and the cut is locked, proceed with the following steps:
- a. Colour Balance and Correction: Apply colour balancing and correction to all footage to ensure consistency.
- b. Rotoscoping: Rotoscope any footage not shot on a green screen to separate the subjects from the background.
- c. Lighting Consistency: Ensure that lighting matches between on-site footage and green screen shots. This should ideally be addressed during the shooting phase.
- d. Detail Enhancement (Optional): Preprocessing with TopazAI should only be considered if any of the material is of lower quality, or has significant noise or motion blur.
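As a rough illustration of step (a), a simple gray-world balance can nudge every plate toward a neutral cast before finer correction. This is only a minimal numpy sketch with a synthetic test frame, not the actual grading workflow:

```python
import numpy as np

def gray_world_balance(frame: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    frame = frame.astype(np.float64)
    channel_means = frame.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(frame * gains, 0, 255).astype(np.uint8)

# Tiny synthetic "frame" with a blue cast: red and green get boosted, blue gets cut.
frame = np.full((4, 4, 3), (80, 100, 160), dtype=np.uint8)
balanced = gray_world_balance(frame)
```

In a real pass this would run per shot (or be replaced by the editor's colour tools); the point is just that consistency across plates is a mechanical, automatable step.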
3. AI Processing of Foreground Elements
- a. Iterative Experimentation: Use the AI models to process the rotoscoped footage, iterating to find the optimal prompts, seeds, and settings.
- b. Prioritize Structure Over Colour: Focus on maintaining structural integrity in the outputs, with color consistency as a secondary goal.
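The iteration in step (a) is easier to keep track of when runs come from an explicit parameter grid rather than ad-hoc tweaking. A minimal sketch of that bookkeeping — the prompts, seeds, strengths, and the stylize() call are all hypothetical placeholders, not a real model API:

```python
from itertools import product

# Hypothetical settings to sweep; values are illustrative only.
prompts = ["ink-line animation, flat colour", "painterly animation, soft edges"]
seeds = [7, 21, 42]
strengths = [0.4, 0.6]

def stylize(clip_path: str, prompt: str, seed: int, strength: float) -> str:
    # Placeholder: a real pipeline would invoke the video model here
    # and return the path of the rendered clip.
    return f"{clip_path}__seed{seed}_str{strength}"

# Keep every run keyed by its settings for side-by-side review later.
results = {}
for prompt, seed, strength in product(prompts, seeds, strengths):
    results[(prompt, seed, strength)] = stylize("shot_012.mov", prompt, seed, strength)
```

Recording every (prompt, seed, strength) combination makes it trivial to reproduce the winning settings once a look is chosen.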
4. Background Generation
Options to consider:
- 3D Modeling: Use tools like Blender to create backgrounds that match the style and perspective of the foreground elements.
- Advanced Inpainting: Explore more sophisticated inpainting techniques to reconstruct backgrounds from footage.
- Hand-Drawn or Painted Backgrounds: Collaborate with artists or use digital painting techniques to create backgrounds that complement the AI-generated characters. (This is inspired by "Undone", the Amazon show).
5. Reassembling the Cut
- a. Composite Elements: Combine the AI-processed foregrounds with the generated backgrounds.
- b. Adjust Timing: Ensure that the pacing and timing remain consistent with the original edit.
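Step (a) is the standard "over" composite: the rotoscoped matte weights the foreground against the background per pixel. A minimal numpy sketch with toy plates — in practice this happens per frame inside a compositing app, not in a script:

```python
import numpy as np

def composite_over(fg_rgb: np.ndarray, fg_alpha: np.ndarray, bg_rgb: np.ndarray) -> np.ndarray:
    """Standard 'over' composite: foreground on top of background, weighted by the matte."""
    a = fg_alpha[..., None].astype(np.float64) / 255.0
    out = fg_rgb.astype(np.float64) * a + bg_rgb.astype(np.float64) * (1.0 - a)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy plates: a white foreground matted over a mid-grey background.
fg = np.full((2, 2, 3), 255, dtype=np.uint8)
alpha = np.array([[255, 0], [128, 0]], dtype=np.uint8)  # opaque, clear, half, clear
bg = np.full((2, 2, 3), 100, dtype=np.uint8)
frame = composite_over(fg, alpha, bg)
```

Soft matte edges (like the hair issue noted earlier) show up exactly in those in-between alpha values, which is why clean rotoscoping matters so much upstream.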
6. Post-Processing and Artistic Enhancements
- a. Clean Up Lines: Refine the visual quality by cleaning up line work and correcting any AI artifacts.
- b. Add Hand-Drawn Details: Incorporate hand-drawn elements to enhance expressions or add stylistic touches.
- c. Visual Effects: Apply any additional visual effects needed and any other elements produced using other methods.
7. Final Color Correction and Grading
- a. Color Correct: Make final adjustments to ensure color consistency across all scenes.
- b. Color Grade: Apply grading to establish the mood and tone of the film, enhancing the visual storytelling.
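For step (b), a grade can be thought of as a lift/gamma/gain adjustment on normalised pixel values. One common formulation, sketched in numpy with purely illustrative parameter values (real grading would of course happen in a dedicated tool):

```python
import numpy as np

def lift_gamma_gain(frame: np.ndarray, lift: float = 0.0,
                    gamma: float = 1.0, gain: float = 1.0) -> np.ndarray:
    """Three-way grade: lift raises shadows, gamma bends mids, gain scales highlights."""
    x = frame.astype(np.float64) / 255.0
    x = np.clip(gain * (x + lift * (1.0 - x)), 0.0, 1.0) ** (1.0 / gamma)
    return (x * 255.0).astype(np.uint8)

# Gently lift and brighten a neutral mid-grey test frame.
grey = np.full((2, 2, 3), 128, dtype=np.uint8)
graded = lift_gamma_gain(grey, lift=0.02, gamma=1.1, gain=1.0)
```

The same three controls applied per channel are what give a grade its warm or cool bias.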
Final Thoughts
I'm encouraged by the potential of this pipeline. The AI tools have proven to be powerful, and with careful planning and execution, I believe they can help me create a visually compelling short film.
The next phase involves moving into full production, applying this pipeline to the entire project. While challenges are expected, especially in maintaining consistency and integrating all elements seamlessly, I'm confident that the groundwork laid through this experimentation will come in very handy.
Overall, this process has not only streamlined the production but also opened doors to creative opportunities I hadn't considered. I'm excited to begin the production process and see how these techniques bring my vision to life.
Thank you for following along on this series. In future updates, I'll share progress on the production, discuss any obstacles encountered, and reflect on the effectiveness of the pipeline in practice.