NVIDIA has applied AI technology to a variety of applications, from its Volta-based SaturnV supercomputer to self-driving cars. Now the GPU maker has extended the technology to a more mainstream feature: slow motion video.
Specifically, a group of researchers has been using NVIDIA’s deep learning technology to artificially transform a standard video recorded at 30fps into slow motion footage equivalent to 240fps. The same technology can also be used to transform manually recorded slow motion video into super slow motion video.
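As a rough illustration of the numbers involved, going from 30fps capture to 240fps-equivalent slow motion is an 8x slowdown, which means synthesising seven new frames between every pair of recorded frames. The function below is purely illustrative (its name is not from the research):

```python
def intermediate_frames_needed(source_fps: int, target_fps: int) -> int:
    """Number of frames to synthesise between each pair of recorded
    frames to make source_fps footage play like target_fps footage."""
    if target_fps % source_fps != 0:
        raise ValueError("target_fps must be a multiple of source_fps")
    # An 8x slowdown (30 -> 240) needs 7 in-between frames per pair.
    return target_fps // source_fps - 1

print(intermediate_frames_needed(30, 240))  # 7
```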
The ability to record slow motion and super slow motion video already exists on smartphones such as the Xperia XZ2 and the Samsung Galaxy S9 series. However, the group writes that manually recording video at high frame rates is impractical, as it requires a large amount of memory and is power-intensive for mobile devices.
NVIDIA’s AI creates slow motion video by analysing pairs of adjacent frames in the original footage. From each pair, it predicts the intermediate frames that would have been captured between them, then slots those synthesised frames in between the originals, producing the aforementioned slow motion video.
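NVIDIA’s actual system relies on trained neural networks to predict the in-between frames; as a much simpler stand-in for the idea, the sketch below slots synthesised frames between two originals using a naive linear cross-fade, which ignores motion entirely. All names here are illustrative, not from NVIDIA’s implementation:

```python
import numpy as np

def interpolate_pair(frame_a, frame_b, n_intermediate):
    """Return n_intermediate synthetic frames between frame_a and frame_b.

    Naive linear cross-fade: each in-between frame is a weighted
    average of the two originals. A real interpolator like NVIDIA's
    would instead predict per-pixel motion and warp the frames along it.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)       # fractional position in the pair
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Two dummy 4x4 grayscale "frames"
a = np.zeros((4, 4), dtype=np.float32)
b = np.full((4, 4), 80.0, dtype=np.float32)
mids = interpolate_pair(a, b, 7)           # 30fps pair -> 240fps slowdown
print(len(mids), float(mids[0][0, 0]))     # 7 10.0
```

The cross-fade produces ghosting on moving objects, which is exactly why the learned, motion-aware approach is needed for convincing results.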
There are some limitations to the method. For a start, the group needed to train their deep learning model to handle specific types of footage. This meant that in order for it to slow down a video of a racquet bursting a jelly-filled balloon or a ballerina doing a twirl, the system first had to be trained on several videos of similar actions.
You can see from the video that the end results are rather impressive, but they aren’t entirely accurate and will need considerable refinement before the technique can become a commercially available feature.