Mind-Blowing Ways AI Is Transforming Our Video Edits - from an editor at BearJam
So, I’ve been chatting with our editors to gather everything you need to know - so all you have to do is sit back and read. (And who doesn’t love that?) I’m even thinking of making a reel about this… so watch this space.
Alright, let’s crack on.
AI VISUALS WITH RABIA
“AI tools have already become integrated into the editing software we use, but there are a few extra platforms I like to use, each with its own unique purpose.
For Upscaling and Sharpening Video
Regardless of where I’m editing, I use the same software: Topaz Video AI.
For sharpening, we had a situation where a lens we were using to give certain shots a continuous hazy effect turned out to be too strong for one of the portrait shots - and it wasn’t discovered until post. I took the raw clip into Topaz Video AI, fixed it in there, then graded it and brought it back into the edit.
Topaz is also great for upscaling. When we have low-resolution footage, we upscale it so it looks better in the edit.
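If you like to script parts of your pipeline, here’s a rough sketch of where a sharpen-and-upscale pass could sit. It uses plain ffmpeg’s unsharp and Lanczos filters rather than Topaz’s trained models (a big part of why Topaz looks better) - the file names and target resolution below are just made-up examples.

```python
# Rough sketch only: conventional sharpening + upscaling with plain ffmpeg.
# Topaz Video AI uses trained models and will do a far better job on real
# footage; this just shows where such a pass sits in a scripted pipeline.
# File names and the 4K target are assumptions, not BearJam's settings.
import subprocess

def sharpen_and_upscale(src: str, dst: str, width: int = 3840, height: int = 2160) -> None:
    """Apply a mild unsharp mask, then a Lanczos upscale; copy audio untouched."""
    vf = (
        "unsharp=luma_msize_x=5:luma_msize_y=5:luma_amount=1.0,"  # gentle sharpen
        f"scale={width}:{height}:flags=lanczos"  # high-quality resample
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", vf, "-c:a", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    sharpen_and_upscale("interview_1080p.mov", "interview_4k.mov")
```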
[Embedded video: BearJam AI Video]
Content-Aware Fill
Sometimes we can use AI to generate a frame to cover up objects in a shot.
An example of this is when we recorded a 5-person chat for Company of Cooks. Due to the location, we needed to put a light above the table - which you can see in the left image.
In post, we cropped the light out of shot as much as possible, then brought a still into Photoshop and used Generative Fill to mask the light out. That patch was then brought back into the edit on top of the footage - a seamless fix!
This technique really only works for a static camera shot; for a moving camera, we’d deploy a more intricate but similar technique in After Effects.
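For the curious, under the hood that static-camera patch is just an alpha-over composite: the cleaned-up still is laid on top of every frame. Here’s a minimal sketch of the idea, assuming you’ve exported the Generative Fill result as a PNG with transparency - the file names and coordinates are hypothetical, and your NLE does this for you anyway.

```python
# Minimal sketch: composite a Photoshop-cleaned RGBA patch onto a locked-off
# shot. Assumes the patch was exported as a PNG with transparency; names and
# coordinates are made up for illustration.
import cv2
import numpy as np

def patch_static_shot(video_in: str, patch_png: str, video_out: str, x: int, y: int) -> None:
    """Overlay an RGBA patch at (x, y) on every frame of a static shot."""
    patch = cv2.imread(patch_png, cv2.IMREAD_UNCHANGED)  # BGRA
    alpha = patch[:, :, 3:4].astype(np.float32) / 255.0  # 0 = footage, 1 = patch
    rgb = patch[:, :, :3].astype(np.float32)
    h, w = patch.shape[:2]

    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w].astype(np.float32)
        # Standard "over" blend: patch where alpha is 1, footage where it's 0.
        frame[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
        out.write(frame)

    cap.release()
    out.release()
```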
Multi-Cam - AutoPod or AI Smart Switch
I tend to only use Multi-Cam tools when doing podcasts.
To quickly explain: Multi-Cam refers to switching between multiple cameras. In a podcast setting, you’d typically have three cameras - one wide and two mid shots. The Multi-Cam process switches between these three angles depending on who is talking and how each person is reacting.
I generally use the vision mix as the base and make any changes I want from that. If I don’t have a usable vision mix, though, I’ll use a Multi-Cam tool and treat what it gives me as a base to make changes from.
In Premiere Pro, I use the AutoPod plug-in; in DaVinci Resolve, I use the new built-in AI Smart Switch tool. These two tools save me a lot of time and give me a great base to work from.
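To give a feel for what these tools are doing behind the scenes, here’s a toy sketch of the core idea: given “who is speaking when”, cut to that person’s camera, with a minimum hold time so the edit doesn’t flicker between angles. The real tools analyse the audio themselves and are much smarter about reactions - the segment data and function below are entirely invented and aren’t AutoPod’s or Resolve’s actual API.

```python
# Toy sketch of multi-cam auto-switching: not AutoPod or AI Smart Switch,
# just the underlying idea. Speaker segments are hand-labelled here; real
# tools derive them from the audio tracks.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds
    end: float     # seconds
    speaker: int   # 0 = host, 1 = guest, ...; -1 = silence/crosstalk

def choose_angles(segments: list[Segment], min_hold: float = 2.0, wide_cam: int = 0):
    """Return (time, camera) cut points; camera 0 is the wide, 1+ are mid shots."""
    cuts: list[tuple[float, int]] = []
    last_cut = -min_hold
    for seg in segments:
        cam = seg.speaker + 1 if seg.speaker >= 0 else wide_cam
        # Only cut if we've held the current angle long enough and it changes.
        if seg.start - last_cut >= min_hold and (not cuts or cuts[-1][1] != cam):
            cuts.append((seg.start, cam))
            last_cut = seg.start
    return cuts

# Host talks, guest replies, host again: one cut per change of speaker.
timeline = [Segment(0.0, 6.5, 0), Segment(6.5, 14.0, 1), Segment(14.0, 20.0, 0)]
print(choose_angles(timeline))  # [(0.0, 1), (6.5, 2), (14.0, 1)]
```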
AI is a great tool to help with mundane and time-consuming tasks, but it can’t just be used on its own. The work still needs the human spark that makes it shine.”
I don’t know about you, but my mind is blown. If you’d like to dive deeper, feel free to reach out - and keep an eye on the blog for more...