Some events I will always remember because they were both significant and industry-changing: the Windows 95 launch, the initial iPod launch event, my first use of Microsoft HoloLens, the launch of AMD’s Threadripper (I just love that name) and, this week, the launch of the NVIDIA RTX platform.
In some ways this RTX workstation graphics launch could eventually eclipse the others because it sets a foundation for changing reality and dramatically alters our perception of the world around us.
Whether we are talking CGI or just Photoshop, the ability to quickly and inexpensively create realistic-looking images based mostly on imagination has been something of a Holy Grail of imaging. Animators and graphic artists now massively outnumber actors in many major movies, and it is often amazing to watch the credits and see the huge number of folks focused on reconstructing reality to make the movie work.
One enormous challenge in this effort has been light. The way light bounces off objects, creates shadows and alters the appearance of an object is what often tells our brain that the object is real and not artificial. Handling the complexity of light reflection and refraction has been a bit of a nightmare for those trying to create these realistic images.
Way back in 1979, a guy by the name of Turner Whitted created a concept called multi-bounce recursive ray tracing. The problem was it was massively resource intensive. He was able to create a low-resolution 512x512 image using a $1.4M midrange computer and 1.2 hours of computer time. Yes, it proved the concept, but the picture’s quality and cost weren’t acceptable at any reasonable production scale. It was believed that it would take one Cray Supercomputer for every pixel in an image to provide a truly realistic real-time image.
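The core of Whitted’s idea is simple enough to sketch in a few dozen lines: fire a ray from the camera, find what it hits, shade that point, then recursively fire a reflected ray and blend in whatever it finds. The toy example below shows that recursion for a single sphere and an overhead light; every name, the scene, and the blend weights are my own illustration, not code from any production renderer.

```python
# A minimal sketch of Whitted-style recursive ray tracing: one sphere,
# one overhead light, and a recursive reflection bounce. Illustrative only.
import math

def sphere_hit(origin, direction, center, radius):
    """Return distance t to the nearest forward intersection, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic for |origin + t*direction - center|^2 = radius^2,
    # with direction assumed unit-length (so the t^2 coefficient is 1).
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-4 else None  # epsilon avoids self-intersection

def trace(origin, direction, depth=0, max_depth=3):
    """Follow a ray: local shading plus a recursive reflection bounce."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    t = sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.1  # background brightness
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    # Simple diffuse term from a light directly above the scene.
    brightness = max(0.0, normal[1])
    if depth < max_depth:
        # Mirror the incoming ray about the surface normal and recurse --
        # this bounce is what puts reflections of the scene on the sphere.
        d_dot_n = sum(d * n for d, n in zip(direction, normal))
        reflected = [d - 2 * d_dot_n * n for d, n in zip(direction, normal)]
        brightness = 0.7 * brightness + 0.3 * trace(hit, reflected, depth + 1)
    return brightness

# One ray fired straight down the -z axis hits the sphere head-on.
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```

Doing this per pixel, per bounce, for millions of pixels at 30+ frames a second is exactly the workload that swamped 1979-era hardware, and it is what the RTX cards’ dedicated ray-tracing hardware is built to accelerate.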
But technology changed, and a few months ago NVIDIA demonstrated it could rapidly create photo-realistic images using its new DGX Station (which costs around $80K) in a reasonable amount of time. This week NVIDIA announced its RTX video card line, costing between $2.3K and $10K (depending on version), which can do the job far more quickly.
Many of us thought this level of performance was at least a decade out. This launch is potentially extremely disruptive for the workstation industry focused on rendering.
With Intel architecture, annual performance improvement, at least for the last decade or so, has generally been under 10 percent. Users don’t really notice performance improvements under 20 percent, so that typically means workstations, which often have a direct connection (in a rendering environment) to the related firm’s bottom line, get updated on a two- to three-year cadence. These aren’t inexpensive products, but they have such a huge impact on productivity that once you get higher than 20 percent improvement you can justify the replacement.
In the case of RTX, we are talking about a 6x performance improvement (according to NVIDIA) over the prior generation of NVIDIA professional graphics cards. This is an unprecedented jump, and I’m wondering what one of these cards would do on top of AMD’s Threadripper CPU platform for maximum impact.
The Revolutionary AI Component
One of the fascinating, and I think underplayed, parts of this new card line is an artificial intelligence (AI) component which can be trained to up-convert images. They first render in low resolution and then the AI takes over and converts the image in real time to 4K or 8K. This conversion capability can be applied to most any low-resolution image. The AI learns how to interpolate the needed extra pixels and then reimages the picture or frame to create a far higher resolution result. Interestingly, it can do the same thing with frames in a movie to take a regular speed GoPro-like video file and convert it into the kind of high-speed video file that would typically require a $140K high-speed camera.
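The AI upscaler itself is a trained neural network, but the underlying idea, estimating the pixels that sit between the known ones, can be illustrated with classical bilinear interpolation. The sketch below is a pure-Python toy of my own, not NVIDIA’s method; the learned approach replaces this fixed blending rule with one inferred from training data.

```python
# A toy of the up-conversion idea: fill in missing pixels by blending the
# surrounding known ones. A learned upscaler does this with a neural net;
# here we use fixed bilinear weights purely for illustration.

def bilinear_upscale(image, factor):
    """Upscale a 2-D grayscale image (list of lists) by an integer factor."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h * factor):
        # Map the output pixel back into source coordinates (clamped).
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels.
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

small = [[0, 100], [100, 0]]
big = bilinear_upscale(small, 2)  # 2x2 source becomes a 4x4 result
```

The same interpolation idea applies in time instead of space: synthesizing the frames between captured frames is what turns ordinary footage into apparent high-speed video.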
This all could be applied to old sports footage that was taken before high-speed cameras were available, or to old TV shows and movies to bring them up to current standards and turn long-languishing video libraries into viable content for movie services like Netflix and Amazon Prime. This alone is a multi-billion-dollar opportunity.
Another thing I expect to become affordable is the ability to put your own family members into certain movies. You could give your son a Robin Hood movie with his face on Robin Hood, for instance, and it would be photorealistic. Or your daughter could be the face of Tomb Raider, and she could be cut into scenes in video games. Speaking of video games, old titles could be far more easily updated for current graphics. Granted, game play would be unchanged, but the screen image would be vastly more pleasing on current TVs and monitors.
This powerful ability to, relatively inexpensively, create and modify images in photo-realistic ways will, I believe, fundamentally change the TV and movie industry. We should see increased interest in old shows and movies as they are updated to new digital standards and movies created from scratch which are both less expensive to create and more realistic to watch. Of course, it won’t fix issues with the scripts and editing (two areas that could also use some AI help), but the quality and amount of video content should increase substantially as a result of this.
For those focused on editing or creating digital visual content, the RTX line from NVIDIA is a game changer. And it is really only the tip of the iceberg, as this is the first generation. Makes you wonder what generation number three will be like, doesn’t it?