Filmmakers now have access to a charcuterie board of new AI tools. You can fire up ChatGPT and there’s a writing partner immediately by your side. You can use machine learning as an Assistant Editor, and boom, you’ve automated otherwise tedious editing tasks. I mean, for a poor creative like me, having a robot friend can really help enhance my low-budget productions.
So, in our continual learning journey as filmmakers, my friend and frequent collaborator, Esteban Palladino, and I looked at our notes and realized we had two complementary production goals we could align on.
Objectives
1) Test the range of the Arri Amira cameras
These incredible machines have exceptional image quality, color science, and versatility, especially when it comes to connectivity with all of our studios around the world. But even with these new cameras and tools, the state of video production is currently in an interesting predicament. With the economy wobbly, budgets are tighter than ever. So, given this production landscape, how do we innovate if we don’t have the resources we’re used to? Is there a way to create entirely new environments using these cameras?
2) Test Stable Diffusion
Stable Diffusion is an open-source AI text-to-image model. You give the robot a text prompt and it returns an image matching the text. So you can type something like “a cat trying to juggle lemons” and get a result. Don’t believe me? Here you go. The robot made this:

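If you want to peek at the nuts and bolts, here’s roughly what that prompt-to-image step looks like in code. This is just a minimal sketch using the open-source diffusers library and a public Stable Diffusion 1.5 checkpoint; the model ID and settings are illustrative stand-ins, not our exact setup:

```python
# A minimal text-to-image sketch with the diffusers library.
# Model ID, prompt, and settings are illustrative, not our exact setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion 1.5 checkpoint
    torch_dtype=torch.float16,         # half precision so it fits on a consumer GPU
).to("cuda")

image = pipe(
    "a cat trying to juggle lemons",
    num_inference_steps=30,   # more steps = more detail, but slower
    guidance_scale=7.5,       # how strictly the image follows the prompt
).images[0]
image.save("juggling_cat.png")
```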
You can also assemble a rough sketch of what you want, even a doodle, and with some trial and error it will create a high-quality concept pic. Here is a rough sketch of “a door” and what the AI produced from it.


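That doodle trick is what’s usually called image-to-image. Here’s a rough sketch of how it might look with the same library; the doodle filename, prompt, and strength value are placeholders, not what we actually typed:

```python
# Sketch-to-image: hand the model a rough doodle plus a prompt and let it do the rendering.
# Filenames, prompt, and strength are placeholders for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

doodle = Image.open("door_doodle.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="an ornate wooden door, concept art, dramatic lighting, highly detailed",
    image=doodle,
    strength=0.7,        # lower sticks closer to the doodle, higher reinterprets more
    guidance_scale=7.5,
).images[0]
image.save("door_concept.png")
```

It usually takes a few tries with different prompts and strength values before one of the results is worth keeping.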
The Prompt
Our game plan was to write a short film to test the dynamic range, high sensitivity, and color space of the Arri Amira cameras and create a new environment using Stable Diffusion and a green screen workflow.
I wrote a quick thing, walked up to my friends and co-workers Briana and Hasan, and the following hilarity ensued in the form of adult research.
Final Product
What we learned:
Overall:
- Embrace the robots! These new AI tools can be useful, especially if you’re limited on resources, time, and budget, and are open to exploring new creative options.
- School is cool! Let’s keep learning and doing. Who knows how AI can help us professionally or personally?
- Briana is a good actress. She should act more. But also, don’t take a ride with Briana, even if she invites you to space.
The Amiras (my learnings):
- I really like the menu systems and the layout of the controls on the Arri. They’re very intuitive compared to Canon or Sony, which have layers and layers of submenus where you can easily get lost.
- The Amira’s sensor produces great color, which makes skin tones feel natural. When working with green screens, this is crucial. I’m brown, and some cameras skew me toward dark purple.
- I’m no Roger Deakins, so when it comes to cinematography, I only know the basic fundamentals of light. The Amira is great for low-light situations, especially with my not-so-professional setups. In this case, it really helped reduce noise in the shadows and made me look like I know what I’m doing.
Stable Diffusion and green screen (Esteban’s learnings):
- AI image generation is a valuable tool for creating mockups and references, blending desired elements with randomness.
- The process often takes multiple attempts, generating 50 to 100 images, but each image is produced quickly, depending on your hardware (see the code sketch after this list).
- AI can upscale an image to 4 times its original resolution. The result may still need additional editing, but it’s a real time-saver when working with low-resolution pictures.
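To give you a sense of what that volume of attempts looks like in practice, here is a rough sketch of a batch workflow with the diffusers library: sweep the random seed to get lots of variations, then run the 4x upscaler on a keeper. The prompt, file names, and counts are assumptions for illustration, not Esteban’s exact pipeline:

```python
# Batch out a pile of variations by sweeping the seed, then 4x-upscale a favorite.
# Prompt, file names, and counts are illustrative; this is not Esteban's exact pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionUpscalePipeline

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "retro spaceship cockpit interior, concept art, moody lighting"
for seed in range(50):  # in practice, skim 50-100 of these and keep a handful
    generator = torch.Generator("cuda").manual_seed(seed)
    image = txt2img(prompt, generator=generator).images[0]
    image.save(f"cockpit_{seed:03d}.png")

# Upscale one pick to 4x its resolution (the upscaler is a separate model and wants a small input).
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
pick = Image.open("cockpit_007.png").resize((256, 256))
big = upscaler(prompt=prompt, image=pick).images[0]   # 256x256 in, 1024x1024 out
big.save("cockpit_007_4x.png")
```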
Here is a mix of images the AI produced for our video, to give you an idea of how many concepts it generates:





Closing Thoughts
You as the human still do a lot of the work; AI is not miraculous or magical. But it saves you a lot of time by helping with image manipulation, and it opens up many possibilities by letting you create concepts much faster.
P.S. Some nerd stuff: the AI used was Stable Diffusion, mostly 1.5 models, running on an RTX 3060 video card.
P.P.S. Here are some BTS photos:


Read more learnings on my blog. If you want to collaborate, contact me (via the form). Also check out Esteban’s work!