
The problem with de-noising VR footage in the original rectangular format is two-fold. On the one hand, we as creators would need to create a near-lossless export of each and every lens/sensor, which would result in a HUGE storage increase. Let's say the original file is a 4:26-minute H264 MP4 with a 3.9 GB file size. In our case, with an Insta360 Pro 2, we record 6 of those, so that comes to about 23 GB for all files. If we have to denoise each one of these files into, say, ProRes, the file size is about 10x that, so for a single shot we would get about 200-250 GB of near-lossless files. On our last shoot we filmed 50 shots, so that would amount to about 10 TB of de-noised original footage. Also, these 6 de-noised ProRes files (all 3840x2880) would need to be loaded into the stitching software (e.g. Mistika VR), and even a Synology RAID with a 10GbE connection is going to struggle with that amount of data.
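
To make the storage overhead concrete, here is a quick back-of-the-envelope calculation in Python. The figures are the rough numbers quoted above (3.9 GB per lens file, 6 lenses, roughly 10x growth for a near-lossless intermediate, 50 shots), not exact measurements:

```python
# Rough storage estimate for denoising every lens file before stitching.
# All figures are the approximate numbers quoted in the post.

original_mp4_gb = 3.9        # one 4:26 H264 file from a single lens
lenses = 6                   # the Insta360 Pro 2 records 6 streams
prores_factor = 10           # a near-lossless intermediate is roughly 10x larger
shots = 50                   # shots filmed on the last shoot

per_shot_original = original_mp4_gb * lenses            # ~23 GB
per_shot_denoised = per_shot_original * prores_factor   # ~230 GB
total_denoised = per_shot_denoised * shots              # ~11,700 GB

print(f"originals per shot    : {per_shot_original:.1f} GB")
print(f"intermediates per shot: {per_shot_denoised:.0f} GB")
print(f"intermediates, {shots} shots: {total_denoised / 1000:.1f} TB")  # the ~10 TB ballpark above
```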

On the other hand, we never know how bad the noise is in each shot, because conditions vary and we change settings all the time. In a daylight shot with 'standard' recording settings the noise will be much less than in 'ilog' mode during sunset, so a batch encoding process that uses the same preset is simply not possible; instead, we have to optimise the de-noise settings for each shot. Also, we may only need 1 minute of those 4:26 minutes, or only 30 seconds may be usable, and we can only see that in the final stitched file. So there's a massive overhead that eats up storage space and time de-noising footage that is not going to be used in the end.

A big problem with VR shots is the size of the frame, which is several times larger than each individual shot produced by an individual camera. Having an algorithm for the equirectangular format would help big time, as the industry is growing and users are getting more aware of issues like noise.

That large total size creates a bottleneck during processing, in the host application and in the memory of the computer as a whole, leading to considerably slower processing. So what you win by not having to use large temporary storage, you then lose on the slower processing of those large VR shots. Perhaps that is no big problem if you only need to take a small part of a long shot, but then there are also other issues related to the stretched nature of the image and to noise that is modified unevenly by the VR projection. That is a big problem for noise reduction algorithms, and I seriously doubt there is a good solution for it other than preventing it by denoising before doing the projection.
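
To illustrate why the noise is modified unevenly by the projection: in an equirectangular frame every row has the same pixel width, but away from the horizon a row covers a smaller and smaller circle of the scene, so the source pixels (and their grain) get stretched horizontally by roughly 1/cos(latitude). This is a generic property of the projection, not tied to any particular stitcher, and the exact figures depend on the lens mapping, so treat it as a sketch:

```python
import math

def horizontal_stretch(latitude_deg: float) -> float:
    """Approximate horizontal magnification of source pixels (and their noise)
    at a given latitude of an equirectangular frame (0 = horizon)."""
    return 1.0 / math.cos(math.radians(latitude_deg))

# Grain that was uniform in the source lens is left almost untouched at the
# horizon but smeared into long horizontal streaks towards the zenith/nadir.
for lat in (0, 30, 60, 75, 85):
    print(f"latitude {lat:2d} deg: noise stretched ~{horizontal_stretch(lat):.1f}x horizontally")
```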

Perhaps you can shorten the process and limit the required disk space by using the following workflow:
1) take the VR video (or the individual shots) into your editing project and select the segments you are actually going to use;
2) export those selected parts to intermediate files;
3) denoise those intermediate files in a separate project;
4) bring the denoised intermediates back into your editing project and replace the originals with the new, shorter and cleaner versions.
I am not sure if the project and editing tools let you do that easily, but that is what I would try to implement if I had to organize the workflow in a more efficient way than either (1) just applying NR to the whole large VR video, or (2) applying NR to the whole individual shot from every camera and then joining those into a VR video and using the required sections.
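
If the selected in/out points are logged outside the NLE, step 2 of the workflow above could also be scripted rather than exported from the editing application. The sketch below assumes ffmpeg is installed; the file names, time codes and the choice of ProRes as the intermediate codec are illustrative only and are not part of the original discussion:

```python
import subprocess

# Hypothetical in/out points (hh:mm:ss) noted down for each camera file.
# Only these short segments are exported and denoised instead of the full 4:26.
SEGMENTS = [
    ("origin_1.mp4", "00:01:10", "00:02:10"),
    ("origin_2.mp4", "00:01:10", "00:02:10"),
    # ... one entry per lens / per used segment
]

def export_segment(src: str, start: str, end: str, dst: str) -> None:
    """Trim one segment and write a near-lossless ProRes intermediate for denoising."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-ss", start, "-to", end,    # keep only the part that will be used
            "-c:v", "prores_ks",         # ProRes via the prores_ks encoder
            "-profile:v", "3",           # profile 3 = ProRes 422 HQ
            "-c:a", "copy",
            dst,
        ],
        check=True,
    )

for i, (src, start, end) in enumerate(SEGMENTS, start=1):
    export_segment(src, start, end, f"intermediate_{i:02d}.mov")
```

Re-encoding only the trimmed segments keeps the intermediates a fraction of the size of the full-length files, which is where most of the disk-space saving in the suggested workflow comes from.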

Our current workflow:
1) keep the 6 original, individual shots on a 10GbE RAID;
2) stitch them into one, full-length equirectangular video with Mistika VR;
3) clean the equirect video in After Effects, including noise reduction;
4) export to an intermediate file in Cineform (export time per intermediate video: about 5 hours);
5) colour, sound and shortening in Adobe Premiere.
