Think by Making: Pitching a Generative Film

argodesign
4 min read · Sep 1, 2023

By Luther Himes IV

An entirely generative mood reel (music and video)

I learn by making. The best way for me to absorb something is to make something, so I recently set out to devise a project that applied new generative AI tools to a task with practical relevance to my work. I’ve made storyboards, style frames, and even UIs with generative tools, but nothing yet in motion. After some deliberation, I settled on creating a Mood Reel.

Essentially aesthetic tone poems, Mood Reels are usually made up of select clips and sounds from existing movies or TV shows to accompany scripts or decks when pitching cinematic narratives, giving a glimpse of what the end result might feel like. Having cut one for a show I was developing, I can say firsthand that the process is meticulous and often arduous. Would it be possible, or even easier, to make a viable Mood Reel entirely through generative AI?

Generating a Process

First, I needed a story. I felt it would be pretty tone-deaf to use ChatGPT for concepts or copy (I’ve always found its “creative” writing dull and trite anyway), so I dipped into my well of “someday I’ll have the time” ideas. I settled on a half-baked novel I’ve kicked around for years called The Concentric Circle, for which I happen to have some interesting story points to explore and a decent mood board collected on Pinterest.

Pinterest Mood Board

Through much experimental finagling, I devised a laborious pipeline to generate unique video clips with the minimum style and substance I needed. After using Midjourney’s /describe command to analyze mood-board images from Pinterest and build a repertoire of arcane key phrases, I fed those phrases back into Midjourney to generate still images for almost every scene (direct prompt-to-video was mostly disappointing). Based on each service’s strengths, I then processed the images with either Pika Labs or Runway’s Gen-2, both text-to-video tools with image-to-video functionality. This was the fun part: watching, like Dr. Frankenstein, as my creations came to life. Would they be angels, or would they be monsters? (They were mostly the latter.)
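For anyone who thinks in code, the shape of that pipeline looks roughly like the sketch below. It is illustrative only: Midjourney, Pika Labs, and Runway Gen-2 were all driven by hand through Discord and their web apps, so describe_image, generate_image, and image_to_video here are hypothetical placeholders for those manual steps, not real APIs.

# Rough sketch of the pipeline described above, in Python.
# describe_image, generate_image, and image_to_video are hypothetical
# stand-ins for manual steps in Midjourney, Pika Labs, and Runway Gen-2.
from pathlib import Path

def describe_image(image_path: Path) -> list[str]:
    """Stand-in for Midjourney's /describe: candidate key phrases for an image."""
    raise NotImplementedError("manual step in Discord")

def generate_image(prompt: str) -> Path:
    """Stand-in for Midjourney image generation from a key phrase."""
    raise NotImplementedError("manual step in Discord")

def image_to_video(still: Path) -> Path:
    """Stand-in for Pika Labs or Runway Gen-2 image-to-video."""
    raise NotImplementedError("manual step in each tool's web app")

def build_clips(moodboard_dir: str) -> list[Path]:
    clips = []
    for board_image in Path(moodboard_dir).glob("*.png"):
        for phrase in describe_image(board_image):   # 1. image -> key phrases
            still = generate_image(phrase)           # 2. phrase -> still frame
            clips.append(image_to_video(still))      # 3. still -> short clip
    # Most clips get discarded; the keepers are cut together in Premiere.
    return clips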

I opted for Meta’s brand-new text-to-music tool, MusicGen, to create a score. It’s slow to use (several minutes for each 15-second audio clip), and my 35 or so results mostly ranged from kind-of-okay to sheer cacophony. Working from a bare-bones description, I eventually landed on a clip that was workable and tonally appropriate. In Adobe Premiere Pro, I laboriously re-cut the music sample from a meandering 15 seconds into a 30-second arc and began laying in video clips.

MusicGen, a text-to-music generator
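For anyone who wants to try that step programmatically, MusicGen ships with Meta’s open-source audiocraft library. The snippet below follows the library’s documented usage; the checkpoint choice and the prompt text are my own placeholders, not the actual description used for the reel.

# Minimal MusicGen sketch using Meta's audiocraft library.
# The 'small' checkpoint and the prompt are illustrative placeholders.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=15)  # seconds per generated clip

descriptions = ['slow, ominous ambient score with sparse piano']  # placeholder prompt
wav = model.generate(descriptions)  # batch of waveforms, one per description

for i, one_wav in enumerate(wav):
    # Writes cue_0.wav (etc.) with loudness normalization.
    audio_write(f'cue_{i}', one_wav.cpu(), model.sample_rate, strategy='loudness')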

A Worthwhile Endeavor?

The “couple days” deadline I’d given myself ballooned into several, partly due to the learning curve and the hunt for ways to generate better video results, but also because of the sheer volume of content that landed on the cutting-room floor. Out of roughly 500 Midjourney images, I created about 100 clips between Pika and Gen-2, with 28 finding their way into the final cut. Sometimes working with generative AI can feel like herding cats: the more you try to steer, the further you get from your destination. In this way, I discovered it might be exactly the wrong tool for this kind of project.

Over 500 images generated in Midjourney

Though frustrating at times, the process was addictive. Every step felt like a casino dice roll: Will this clip be awesome? Will it just be a static image? Will it be nonsense smudges? Will it be cool nonsense smudges? But the trudge was ultimately a success. I did make a Mood Reel, and it’s OK, but more importantly, I learned a lot. I learned about process, patience, and possibilities: the seemingly infinite possibilities these new generative AI technologies suggest are just on the horizon.

And I learned, truly, that everything is going to change. Even how we make movies.

A motion designer with over 15 years of experience bringing stories and ideas to life, Luther Himes IV makes storytelling the heart of every project. Weaving a background in graphic design and illustration into marketing, video, and motion design, Luther has developed a portfolio of work spanning mediums including products, television, and film. He’s crafted award-winning work for clients like Apple TV+, Magic Leap, HBO, T-Mobile, and Patrón Tequila.

argodesign

We are a product design firm. We love design – for the technology, for the simple joy of craft, and ultimately for the experiences we create.