NVIDIA NGX is a new deep learning-powered technology stack of AI-based functions that accelerate and enhance graphics, photo imaging, and video processing directly inside applications. NGX components use Tensor Cores for maximum efficiency, so an RTX-enabled GPU is required. The NVIDIA NGX SDK makes it simple for developers and researchers to try out new AI-based features in pre-release form.
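As a quick sanity check before relying on any NGX feature, an application can confirm that a Tensor Core-equipped GPU is present. The minimal sketch below uses only the standard CUDA runtime API, not the NGX SDK, and treats compute capability 7.0 or higher (Volta and Turing) as the indicator of Tensor Core support; it is an illustrative check, not an official NGX capability query.

// Minimal sketch: look for a GPU with Tensor Cores (compute capability
// 7.0 or higher) before attempting to load any NGX feature. Uses only
// the standard CUDA runtime API; this is not part of the NGX SDK.
#include <cuda_runtime.h>
#include <cstdio>

bool hasTensorCoreGpu()
{
    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0)
        return false;

    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop{};
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;
        // Tensor Cores were introduced with compute capability 7.0 (Volta).
        if (prop.major >= 7) {
            std::printf("Tensor Core GPU found: %s (sm_%d%d)\n",
                        prop.name, prop.major, prop.minor);
            return true;
        }
    }
    return false;
}

On a Turing RTX card this reports sm_75; on older GPUs without Tensor Cores it returns false, and an application would fall back to its non-AI code path.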

NGX Components

NVIDIA NGX has three main components: RTX, the real-time ray tracing API for games; Tensor Core hardware, such as the Quadro GV100, which powers NGX DLSS; and the NVIDIA NGX Neural Graphics Framework for deep learning-enhanced rendering, image processing, video processing, and AI upscaling.


AI Slow-Mo


This is not an editing feature but rather a video-analysis tool. The program analyzes frames for features and objects and calculates object and camera movement to generate new frames of video between existing ones. It can also identify components of video shot by professional or amateur camera operators. NVIDIA AI Slow-Mo is powered by the Turing Tensor Cores to drastically speed up the synthesis of the intermediate frames needed for slow-motion video.
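To make the idea of synthesizing frames between existing ones concrete, the sketch below shows the naive baseline: a linear blend of two neighboring frames. This is not NVIDIA's method; the AI Slow-Mo network estimates object and camera motion and warps pixels along it, which is what avoids the ghosting a plain blend produces. The interleaved 8-bit RGB layout and equal-sized frames are assumptions for illustration.

// Illustrative baseline only: the simplest way to synthesize a frame
// "between" two existing frames is a linear blend. AI Slow-Mo instead
// warps pixels along estimated motion. Assumes both frames are the
// same size, interleaved 8-bit RGB.
#include <cstdint>
#include <vector>

std::vector<uint8_t> blendFrames(const std::vector<uint8_t>& frameA,
                                 const std::vector<uint8_t>& frameB,
                                 float t)  // t in [0,1]: position between the frames
{
    std::vector<uint8_t> out(frameA.size());
    for (size_t i = 0; i < frameA.size(); ++i) {
        float v = (1.0f - t) * frameA[i] + t * frameB[i];
        out[i] = static_cast<uint8_t>(v + 0.5f);
    }
    return out;
}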

Transform High Resolution To Real-Time

With Transform, you can feed in a high-resolution (typically 4K) video sequence and process it in real time on an RTX GPU using AI denoising. It's designed for streamers who want to share their experiences with higher-quality graphics where available. This version also lets you take advantage of the Quadro GV100's Tensor Cores to do AI denoising at full 8K resolution in real time, but that use case will not be supported until later in Q1 2019.
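As an illustration of the data flow behind real-time AI denoising, the sketch below shows the kind of per-frame inputs deep learning denoisers commonly consume: the noisy ray-traced color plus auxiliary guide buffers such as albedo and normals. The struct, function name, and the simple box-filter body are placeholders standing in for the network evaluation; none of this is the NGX API.

// Hypothetical sketch of a per-frame denoise step. The box filter below
// is only a stand-in for the neural-network evaluation; the names and
// buffer layout (width*height*3 floats) are assumptions.
#include <cstdint>
#include <vector>

struct DenoiseInputs {
    std::vector<float> noisyColor;  // RGB radiance from the path tracer
    std::vector<float> albedo;      // guide buffer (unused by this stand-in)
    std::vector<float> normals;     // guide buffer (unused by this stand-in)
    uint32_t width = 0, height = 0;
};

std::vector<float> denoiseFrame(const DenoiseInputs& in)
{
    std::vector<float> out(in.noisyColor.size(), 0.0f);
    for (uint32_t y = 0; y < in.height; ++y)
        for (uint32_t x = 0; x < in.width; ++x)
            for (int c = 0; c < 3; ++c) {
                float sum = 0.0f; int count = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = int(x) + dx, ny = int(y) + dy;
                        if (nx < 0 || ny < 0 ||
                            nx >= int(in.width) || ny >= int(in.height))
                            continue;
                        sum += in.noisyColor[(ny * in.width + nx) * 3 + c];
                        ++count;
                    }
                out[(y * in.width + x) * 3 + c] = sum / count;
            }
    return out;
}

A real integration would create the denoiser once at startup and evaluate it every frame; the guide buffers are what let a learned denoiser preserve edges that a simple filter like this one blurs away.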

Computational Video

NVIDIA Computational Video leverages the Turing Tensor Cores to accelerate video analytics in hardware. With NVIDIA NGX, you can apply deep learning-based computer vision techniques such as semantic segmentation and optical flow directly to video footage before passing it on for encoding, transcoding, or other tasks in the content workflow. This lets applications use AI both when ingesting footage (analysis) and when processing that footage (synthesis). You can also use Compute Unified Device Architecture (CUDA) APIs built into the NVIDIA NGX Neural Graphics Framework to accelerate computer vision tasks. To do this, you'll need an RTX 2080 or better GPU with at least 6 GB of memory.
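The sketch below marks where such an analysis pass sits in the workflow: a frame comes in, analysis runs on it, and the result is handed on toward the encoder. A trivial frame-difference motion mask stands in for the semantic segmentation or optical flow a real NGX/CUDA pipeline would run; the 8-bit grayscale frame layout is assumed.

// Stand-in for the "analyze before encode" step: a per-pixel difference
// between consecutive frames, producing a crude motion mask. A real
// pipeline would run segmentation or optical flow here instead.
// Assumes equal-sized 8-bit grayscale frames.
#include <cstdint>
#include <cstdlib>
#include <vector>

std::vector<uint8_t> motionMask(const std::vector<uint8_t>& prevFrame,
                                const std::vector<uint8_t>& currFrame,
                                uint8_t threshold = 16)
{
    std::vector<uint8_t> mask(currFrame.size());
    for (size_t i = 0; i < currFrame.size(); ++i) {
        int diff = std::abs(int(currFrame[i]) - int(prevFrame[i]));
        mask[i] = (diff > threshold) ? 255 : 0;  // 255 = changed pixel
    }
    return mask;
}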

NGX Architecture


The NVIDIA NGX architecture is composed of four main components: the Turing Tensor Cores, an NVDLA (NVIDIA Deep Learning Accelerator) neural-network accelerator, a video engine, and an image processing engine.


NGX DLSS: The NVIDIA NGX Neural Graphics Framework introduces deep learning super sampling for accelerated rendering. It uses deep learning to intelligently reconstruct detail in areas where traditional upsampling and anti-aliasing methods fall short. This version supports 8K resolution with the RTX 2080 Ti and 4K resolution with other GPUs.
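To make the rendering-side arithmetic concrete, the sketch below shows the resolution bookkeeping a DLSS-style integration performs: the engine renders at a reduced internal resolution and the network reconstructs the full output resolution. The 0.5 per-axis scale is an illustrative assumption, not a published NGX mode definition.

// Illustrative resolution bookkeeping for a DLSS-style integration:
// render internally at a fraction of the target size, then reconstruct
// the full output resolution. The scale factor is an assumption.
#include <cstdint>
#include <cstdio>

struct Resolution { uint32_t width, height; };

Resolution renderResolutionFor(Resolution output, float perAxisScale)
{
    return { static_cast<uint32_t>(output.width  * perAxisScale),
             static_cast<uint32_t>(output.height * perAxisScale) };
}

int main()
{
    Resolution target{3840, 2160};                            // 4K output
    Resolution internal = renderResolutionFor(target, 0.5f);  // assumed scale
    std::printf("Render %ux%u, reconstruct to %ux%u\n",
                internal.width, internal.height, target.width, target.height);
    return 0;
}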


AI Display Up-Scaling: With this feature, you can take advantage of AI upscaling on Turing GPUs to approximate new levels of detail that are sharp enough to replace previous upsampling methods. The SDK for this feature will be released in Q1 2019 and will mainly target AI upscaling of ray-traced media. Applications using this include Photoshop, DxO FilmPack 5, and Natron.


AI Upscaling: The NVIDIA NGX Neural Graphics Framework lets you intelligently upscale footage to generate high-quality results on Turing GPUs. It supports 8K resolution with the RTX 2080 Ti and 4K resolution with other GPUs.
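For contrast, the sketch below shows the traditional baseline that AI upscaling replaces: plain resampling (here nearest-neighbor on a single grayscale channel), which can only redistribute existing pixels, whereas an AI up-res network infers plausible new detail. The buffer layout is assumed for illustration.

// Traditional baseline for comparison: nearest-neighbor resampling of
// a single 8-bit grayscale channel. It reuses existing pixels only;
// AI upscaling synthesizes new detail instead.
#include <cstdint>
#include <vector>

std::vector<uint8_t> upscaleNearest(const std::vector<uint8_t>& src,
                                    uint32_t srcW, uint32_t srcH,
                                    uint32_t dstW, uint32_t dstH)
{
    std::vector<uint8_t> dst(size_t(dstW) * dstH);
    for (uint32_t y = 0; y < dstH; ++y) {
        uint32_t sy = y * srcH / dstH;           // nearest source row
        for (uint32_t x = 0; x < dstW; ++x) {
            uint32_t sx = x * srcW / dstW;       // nearest source column
            dst[size_t(y) * dstW + x] = src[size_t(sy) * srcW + sx];
        }
    }
    return dst;
}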

Conclusion


Deep learning-based real-time ray tracing is already here today with the Quadro GV100, accelerating AI denoising at full 8K resolution in real time. It's designed for streamers who want to share their experiences with higher-quality graphics where available.


AI Slow-Mo is a video analysis tool that uses deep learning to screen footage for features and objects. The program calculates object and camera movement speeds to generate new frames of video between existing ones.