At SIGGRAPH this week, we’ll finally get to see the massive amount of work being done to create the Metaverse: tens of thousands of developers, thousands of companies, a deep dive into tools like the HTML-like Universal Scene Description (USD), and a genuine sense that we still don’t have enough resources.
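For context, USD scenes are declared in a readable, markup-like text format, which is why it gets compared to HTML. A minimal sketch of a `.usda` file (the prim names here are illustrative, not from any real project) looks like this:

```usda
#usda 1.0

def Xform "Building"
{
    def Sphere "Dome"
    {
        double radius = 2.0
    }
}
```

Like HTML, the nesting is explicit and human-editable, which is part of what makes USD a plausible backbone for a shared, composable virtual world.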
To make the Metaverse real without spending thousands of hours of programmer time digitizing the world, we need an automated process. At the heart of that process would have to be an AI that knows how to create a genuine digital twin, including the physical laws that must apply to make it truly realistic.
Look at the last few minutes of NVIDIA CEO Jensen Huang’s GTC keynote, where the company demonstrates a true digital twin created with its Omniverse platform. You’ll see that the level of detail required for a genuine digital twin is significant.
Let’s talk about the Omniverse missing link, with the expectation that we might see it emerge at SIGGRAPH during or after the NVIDIA keynote:
Making The Metaverse Real Faster
One of the primary goals for many working on the Metaverse is to digitize the world. Doing this manually would be virtually impossible to staff, because the world keeps changing as we digitize it. New roads and buildings go in daily, and those structures age and change once they are built. There are regular natural and man-made disasters, and the building industry isn’t likely to stop and wait while we digitize it.
This rate of change means that as fast as people create the Metaverse, the real objects they are emulating will be changing; people alone will never have the bandwidth to finish the project. Instead, a large group of AIs working from plans and images would need to be constantly engaged, creating the Metaverse and assuring its accuracy, both initially and over time.
We’ll need these AIs not just to look at pictures but to read plans, virtually construct the various structures, and alter them constantly to address changes and assure accuracy as the real world changes. This innovation will make building approval far more interesting, because engineers will be able to see what a building looks like in both its final form and its final location. Neighbors will also see a building’s impact early in the process, with potentially enough time to raise their objections and get them heard while changes are still digital and comparatively easy to make.
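The constant re-synchronization described above is, at its core, change detection: compare the latest real-world captures (scans, photos, plans) against what each twin asset was built from, and rebuild whatever drifted. Here is a minimal sketch of that loop; the asset names, the capture format, and the `DigitalTwin` class are all illustrative assumptions, not any Omniverse API:

```python
import hashlib


def fingerprint(capture: bytes) -> str:
    """Reduce a real-world capture (scan, photo, plan) to a comparable hash."""
    return hashlib.sha256(capture).hexdigest()


class DigitalTwin:
    """Hypothetical registry tracking which capture each twin asset was built from."""

    def __init__(self):
        self._assets = {}  # asset name -> fingerprint of its source capture

    def ingest(self, name: str, capture: bytes):
        """Build or rebuild a twin asset from the latest capture."""
        self._assets[name] = fingerprint(capture)

    def stale_assets(self, latest_captures: dict) -> list:
        """Return assets whose real-world counterpart changed or is new."""
        return [
            name for name, capture in latest_captures.items()
            if self._assets.get(name) != fingerprint(capture)
        ]


twin = DigitalTwin()
twin.ingest("city-hall", b"plan-rev-A")
twin.ingest("bridge", b"scan-2023-07")

# A new survey arrives: the bridge changed, city hall did not, a tower is new.
survey = {
    "city-hall": b"plan-rev-A",
    "bridge": b"scan-2023-08",
    "tower": b"plan-rev-1",
}
changed = twin.stale_assets(survey)
print(sorted(changed))  # ['bridge', 'tower']
```

The hard part the article calls for is, of course, the AI that turns a flagged capture back into accurate geometry; the bookkeeping above only decides what needs rebuilding.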
The analytics surrounding the effort could test the virtual structure for integrity. You’ll be able to emulate disasters on or near the virtual structure to verify that occupants can get out quickly and safely if needed.
Finally, the AI should also analyze the plans and point out physical or artistic problems so they can be addressed early. One of the high costs of building anything is change orders. If those problems are sorted out before physical construction starts, the reduction in building costs would likely more than pay for creating the digital twin, even if a human created it.
At SIGGRAPH, you should get a sense of the massive, internet-like effort to create the Metaverse. The NVIDIA keynote, in particular, will be a showcase of progress, examples, and, as is often the case with NVIDIA, a few unanticipated elements that will undoubtedly accelerate the creation and adoption of the Metaverse.
But what the industry needs now is an automated tool that takes what is real and what is planned, creates digital twins, and surrounds them with technology that keeps this new virtual world in sync with the real one. When done, it will have a significant impact on everything from planning and operating cities and sites to creating movies and other simulations that better emulate the real world.
But people alone can’t get us there. We’ll need AIs that can not only help us create the Metaverse but also keep it synchronized.