When we first see someone, our attention instinctively goes to their eyes, mouth, and overall expression. With generative AI playing a growing role in shaping visual narratives, I wanted to explore how it interprets and stylizes these defining human features. 

FACES is an experimental short created with assets generated in Midjourney, paired with a soundscape built from synthesis and generated effects. After the prompting and curation process, the clips were further manipulated and graded in post, outside of Midjourney, to establish a consistent visual language.

This was a great proof-of-concept project for me. I view it as a quick edit and moodboard exercise, and I plan to apply the learnings from it to other projects.

The sound design was brought together in Bitwig. I used Dance Diffusion to train models on a wide variety of Drum & Bass samples and synthesized sounds. After letting the models generate output at various intervals, the results were edited down into a sound effects library I could pull from.
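
For reference, a minimal sketch of the generation step, using the public Dance Diffusion pipeline in Hugging Face diffusers with a stock Harmonai checkpoint (not the custom-trained Drum & Bass models used here), looks something like this:

```python
# Minimal sketch: generating short audio clips with a public Dance Diffusion
# checkpoint via Hugging Face diffusers. The custom-trained models used for
# FACES are not published; "harmonai/maestro-150k" stands in as an
# illustrative pretrained checkpoint.
import torch
from diffusers import DanceDiffusionPipeline
from scipy.io.wavfile import write

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Generate a small batch of clips to audition and edit down into a library.
audios = pipe(batch_size=4, audio_length_in_s=4.0).audios

for i, audio in enumerate(audios):
    # Each clip has shape (channels, samples); transpose to (samples, channels) for WAV.
    write(f"generated_{i:02d}.wav", pipe.unet.config.sample_rate, audio.transpose())
```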
