
Our Complete AI Video Editing Workflow from Prompt to Final Cut
Mihaly Varga
Founder & AI Creative Director
The most common question we get after sharing a new short film isn't 'which AI tool did you use?' but 'how do you actually put it all together?' This post answers that in full.
Phase 1 — Concept & Storyboard
Every project starts with a one-paragraph concept — a mood, a world, a feeling. From that, we write a loose shot list: 10–20 individual shots with camera direction, subject, and atmosphere notes. We don't script dialogue or narration at this stage, just composition intent.
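A shot list like the one above is easy to keep structured from day one. As a minimal sketch (the field names and example shots are our illustration, not a fixed template), each entry carries the composition intent and deliberately leaves dialogue empty at this stage:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    number: int
    subject: str
    camera: str        # camera direction, e.g. "slow dolly in"
    atmosphere: str    # mood / lighting notes
    dialogue: str = "" # intentionally empty in Phase 1

# A loose shot list is just an ordered collection of these entries.
shot_list = [
    Shot(1, "lone figure on a dune", "wide static", "dawn haze"),
    Shot(2, "close on the eyes", "handheld push-in", "warm backlight"),
]
```

Keeping the list as data rather than free text makes it trivial to carry shot numbers forward into generation and editing.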
Phase 2 — Image Generation
For films that rely on photorealistic character consistency, we use Midjourney to establish a 'character sheet' — several reference generations of the same subject at different angles and lighting before we move to video generation. This reference library is what keeps a film feeling coherent across clips.
Phase 3 — Video Generation
We use Runway ML Gen-3 and Kling for video generation, choosing based on the shot type. Runway excels at cinematic camera movement; Kling handles complex subject motion better. We generate 3–5 variations per shot and select the best.
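With 3–5 variations per shot, file bookkeeping matters more than it sounds. A small helper like the following, assuming a naming convention such as `shot03_v2.mp4` (the convention and the helper are illustrative, not part of any tool's API), groups generated clips by shot so the best take can be selected per shot:

```python
import re

def group_variations(filenames):
    """Group generated clip filenames by shot number.

    Assumes names like "shot03_v2.mp4"; anything else is ignored.
    """
    shots = {}
    for name in filenames:
        m = re.match(r"shot(\d+)_v(\d+)\.mp4$", name)
        if m:
            shots.setdefault(int(m.group(1)), []).append(name)
    return shots

clips = ["shot01_v1.mp4", "shot01_v2.mp4", "shot02_v1.mp4"]
by_shot = group_variations(clips)
# by_shot maps each shot number to its candidate variations,
# ready for a per-shot review-and-select pass.
```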
Phase 4 — Assembly & Edit
All clips land in DaVinci Resolve. We cut to a music track first — the rhythm of the edit is set by the audio, not the visuals. Once the cut feels right, we layer in colour grading using our custom LUT built to unify the slightly different looks between AI generation tools.
1. Import all selected clips into a DaVinci Resolve project
2. Cut a rough assembly to the selected music track
3. Tighten cuts and add transitions where needed
4. Apply the base colour grade with our AI-unification LUT
5. Make per-clip grade adjustments for consistency
6. Export at 4K H.265 for YouTube, 1080p H.264 for social
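The two delivery presets in the final step can also be expressed as ffmpeg commands, shown here as a sketch (we export directly from Resolve, so the codec settings below are our ffmpeg translation of those presets; file names and rate-control values are placeholders):

```python
def export_commands(master="master.mov"):
    """Build the argument lists for the two delivery encodes."""
    youtube_4k = [
        "ffmpeg", "-i", master,
        "-c:v", "libx265", "-crf", "20", "-preset", "slow",
        "-tag:v", "hvc1",               # HEVC tag for Apple player compatibility
        "-c:a", "aac", "-b:a", "320k",
        "youtube_4k.mp4",
    ]
    social_1080p = [
        "ffmpeg", "-i", master,
        "-vf", "scale=1920:1080",       # downscale the 4K master to 1080p
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",
        "-c:a", "aac", "-b:a", "192k",
        "social_1080p.mp4",
    ]
    return youtube_4k, social_1080p
```

Building the commands as argument lists (rather than shell strings) keeps them safe to hand to `subprocess.run` without quoting issues.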
This full pipeline is covered step-by-step in our Runway ML course, including downloadable project files from a completed short film.