
Adobe, NVIDIA Redefine Reality with Sensei, RTX


Technology is moving incredibly quickly at the moment. That was obvious at NVIDIA’s GPU Technology Conference (GTC) this week. One of the most interesting sessions wasn’t at GTC at all but at the Adobe Summit. Adobe Summit was some distance away (Summit was in Las Vegas, GTC in San Jose), and the joint chat between the two firms’ CEOs was streamed between the two events.

One of the most fascinating statements was made by Jen-Hsun Huang, NVIDIA’s CEO. He offered one of the most succinct explanations of augmented reality (AR) vs. virtual reality (VR) I’ve ever heard. He said that AR is the virtual world tunneling into our world, and VR is us tunneling into the virtual world. Not only is that very accurate, it sounds a ton cooler and simpler than any other explanation I’ve ever heard.

But the big idea from the stage was the blending of Adobe’s video editing artificial intelligence (AI) with NVIDIA’s RTX ray tracing technology. The result could be more than evolutionary — it would be disruptively revolutionary. This could redefine reality as we know it.

NVIDIA RTX: Ray Tracing on Steroids

One of the most consistent goals from past GTC events has been real-time photorealistic video rendering. Well, NVIDIA finally demonstrated it on stage at GTC this year with the launch of RTX. It presented what looked like a lost clip from the latest Star Wars movie. (Don’t worry — NVIDIA is working with Industrial Light and Magic.) It looked like any other scene from the film: the actors chatting, the armor and weapons seeming real, and the set looking like another expensive effort. Only this time, it was rendered in real time. No actors (well, only their voices), no sets — all rendered.

Before this, we’ve been able to fully render photorealistic movies for some time, but it typically takes up to 10 hours per frame to render them (that is 300 hours of expensive compute time per second of footage at 30 frames a second). I was thinking about some of my favorite green-screen films that didn’t do that well (“Sky Captain and the World of Tomorrow” and “John Carter of Mars”), and I believe both would have benefitted massively from the ability to edit their story lines far more cost effectively. That might have made them far more popular movies and, in the case of “Sky Captain,” might have made it photorealistic.
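To put that render budget in perspective, here is a minimal back-of-the-envelope sketch in Python, using the 10-hours-per-frame and 30-frames-per-second figures above as assumptions (the exact numbers vary widely by studio and scene):

```python
# Back-of-the-envelope cost of offline photorealistic rendering.
# Assumptions (from the figures above): ~10 compute-hours per frame, 30 fps.

HOURS_PER_FRAME = 10       # assumed offline render time for one frame
FRAMES_PER_SECOND = 30     # standard playback rate used in the article

def render_hours(seconds_of_footage: float) -> float:
    """Total compute-hours needed to render a clip of the given length."""
    return seconds_of_footage * FRAMES_PER_SECOND * HOURS_PER_FRAME

if __name__ == "__main__":
    print(render_hours(1))    # 300.0 compute-hours for one second of footage
    print(render_hours(60))   # 18000.0 compute-hours for one minute
```

Real-time rendering collapses that entire budget to a thirtieth of a second per frame, which is why the RTX demo matters for editing and iteration, not just final output.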

But this has implications well beyond better, cheaper, more amazing movies. Being able to render photorealistically in real time would massively improve simulations and modeling, allowing companies to create and refine their visions virtually before ever considering actually building an object. This capability might have helped dramatically change the list of the ugliest cars ever made. (By the way, I was shocked to see the Pontiac Aztek was number five. I had it as number one with a bullet — which should have been used to put that car out of our misery).

Adobe Sensei

Adobe Sensei is Adobe’s unified artificial intelligence (AI) and deep learning platform. Currently, it handles many of the repetitive parts of creating images. It makes it easier to organize, improve and reuse images, massively reducing the time needed to realize an idea digitally. Fundamentally, Sensei gives graphic artists superpowers, allowing them to create, modify and improve images at incredible speed.

Right now, Sensei spans all three of Adobe’s cloud efforts (Experience Cloud, Creative Cloud and Document Cloud), providing different relevant capabilities in each. In the Creative Cloud, it lets you dynamically alter faces and expressions without distortion, helps you find images faster, and uses face tracking and optical flow interpolation to smooth out transitions. In the Document Cloud, it helps with PDF conversion, provides a signing capability and helps make PDF files accessible. In the Experience Cloud, it applies smart tags to organize files, looks for anomalies and helps balance and optimize ad spend across channels.

Adobe + NVIDIA = Magic

The old Arthur C. Clarke quote “Any sufficiently advanced technology is indistinguishable from magic” plays here. We are talking about creating, in real time, images indistinguishable from reality. This offers a near-term future of far better movies, video games, simulations and digital models. It will transform a number of industries, including medicine. One of NVIDIA’s showcase examples was the ability to take grainy ultrasound images and transform them into photorealistic pictures. (You can even see your unborn baby smile.) It holds potential for battlefield simulation, rapid prototyping and, yes, the renewed hope that some of those early movies we loved could be automatically made significantly more realistic.

But it is the combination with Adobe that shows the impact on creation and the emergence of an AI to help with that process. Granted, this will also make fake news videos far easier and more affordable to produce, suggesting we’ll need to watch for bad actors in what otherwise will be an amazing revolution.

In the end, technology’s power is multiplied through partners, and both Adobe and NVIDIA partner well. This suggests there will be even more amazing things in the future. One final comment: at the event, Jen-Hsun challenged Adobe CEO Shantanu Narayen, in front of a cheering audience of Adobe developers, to get a tattoo. That’ll teach him to have his conference the same week as NVIDIA’s. Silicon Valley is often a strange and wonderful place.
