03 - Character Driven Story & Dialogue
This project is a cinematic AI experiment exploring how generative tools can be used to build tension, character, and atmosphere over time, rather than relying on isolated visuals.
The focus of this project was to treat AI video as a storytelling environment, not just a visual generator. Every sequence was approached with intent: who holds power in the frame, where the eye is drawn, and how silence, pacing, and proximity shape the dynamic between characters.
This became an exploration of restraint as much as capability. I experimented with controlled camera movement, lingering close-ups, and minimal dialogue to create a slow-burn intensity between Ethan and Rosa. Instead of fast action, the emphasis was on presence. Eye contact. Micro-movements. The kind of tension that builds before anything is said or done.
The goal was to push AI tools toward character-driven storytelling, where mood, rhythm, and emotional subtext carry as much weight as the visuals themselves.


Final Video (Part 1)
Final Video (Part 2)
Challenge
The primary goal of this experiment was to explore whether AI tools could support a short narrative sequence with dialogue, rather than purely visual or atmospheric clips.
This project focused on building a small action scene inspired by dark fantasy and vampire fiction, testing how AI tools could be used together to move from story concept through to a finished short film.
This included exploring how traditional filmmaking steps such as storyboarding, shot composition, character performance, and sound design translate into an AI-assisted workflow.
What I tested
- Introducing scripted dialogue into an AI-generated sequence and integrating voice performance into the edit
- Building a hybrid creative pipeline combining concept art, video generation, voice synthesis, and sound design
- Using AI image generation to storyboard and design key shots before committing to video generation
- Experimenting with camera framing and character blocking to create clearer visual storytelling in action scenes
- Creating consistent characters across multiple scenes using character reference imagery
Tools Used
Midjourney - Character base concepts and key scene environments
Grok - Prompt experimentation and early composition testing before generating final video clips
Kling 3.0 - Video generation used to convert key images and character references into animated scenes
Eleven Labs - Voice synthesis and sound design, including transforming recorded dialogue into character voices and music
Workflow
To be added
Key Learnings
One of the biggest takeaways from this project was that AI video still benefits from a very traditional creative process. The strongest results came when the workflow followed a familiar production pipeline. Story first, then visual concepting, followed by shot composition, motion generation, and finally sound design. Treating AI tools as part of a structured filmmaking process produced far more consistent and usable results than trying to generate finished scenes in a single step.
Another key learning was how important pre-visualisation and iteration are when working with generative tools. Using image generation to establish characters, environments, and shot compositions before committing to video generation helped significantly reduce wasted attempts. It also made it easier to control character placement, camera framing, and action beats across multiple clips. In practice, this meant approaching AI video less like a one-click solution and more like directing a scene. Testing ideas, refining prompts, and gradually building the final sequence layer by layer.
Finally, introducing dialogue and voice performance added a new level of complexity but also significantly improved the sense of narrative. Recording the dialogue first and then transforming it using voice synthesis allowed the pacing and tone of the scene to feel more intentional. It reinforced the idea that while AI can generate visuals quickly, the strongest results still come from combining those tools with human direction, timing, and storytelling instincts.
Outcome
While the experiment highlighted some of the current limitations of AI video generation, particularly around precise character movement and interaction, it also showed how those limitations can be managed through careful planning, reference imagery, and iterative prompting. The finished piece ultimately served as a proof of concept for a hybrid creative workflow in which AI tools support the role of a director, enabling rapid experimentation with visual storytelling, shot composition, and sound design.
