There is no doubt that the concept of the metaverse is being overhyped. People are talking about it as if it exists. But the truth is it is not even well-defined … yet.
Yes, there are a number of small implementations that prove the concept, but to truly create the metaverse we will need a massive, internet-like infrastructure, and we do not even have the equivalent of a browser for the metaverse … yet.
Raja Koduri, VP and GM of Intel’s Accelerated Computing Systems and Graphics Group, recently offered his opinion on where the metaverse stands, and it is worth a read. While we would like the metaverse to already be the Holodeck from “Star Trek: The Next Generation,” we are at least a decade away from something we can use the way we use the internet today, and that milestone will arrive well before the eventual Holodeck experience.
Let us talk about where we are with the metaverse, and the improvements and timeline needed to get where we need to go:
The metaverse today
The metaverse today is not so much a place to go as a collection of technologies centered on tools like NVIDIA’s Omniverse, which can create simulations used to train robots and autonomous cars. It is an easier-to-use and more comprehensive tool set than what architects have long used to create virtual buildings, and it delivers far more realistic results, including lighting effects, reflections, and a limited application of physics.
For targeted simulations, the metaverse concept is already workable, but today it is really just a better simulation platform for individual projects, nowhere near the full virtual world we expect.
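To make that concrete: Omniverse-style simulation tools are built on the OpenUSD scene description format, and the sketch below shows what authoring a trivially simple scene looks like. This is a minimal illustration using the open-source pxr Python bindings, not anything specific to Omniverse itself; the file name and values are hypothetical.

```python
# Minimal sketch: authoring a tiny OpenUSD scene of the kind Omniverse-style
# simulation tools consume. Uses the open-source "pxr" Python bindings;
# the file name and values here are illustrative only.
from pxr import Gf, Usd, UsdGeom

# Create a new stage, the top-level container for the scene.
stage = Usd.Stage.CreateNew("demo_scene.usda")

# Define a root transform and a single piece of geometry under it.
UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(0.5)

# Give the sphere a display color so renderers have something to shade.
sphere.GetDisplayColorAttr().Set([Gf.Vec3f(0.2, 0.4, 0.8)])

# Save the layer to disk; USD-aware tools (Omniverse, game engines, DCC apps)
# can open and extend this file collaboratively.
stage.GetRootLayer().Save()
```

The point of the example is scale, not sophistication: today’s “metaverse” is individual scene and simulation files like this one, shared between tools, rather than a persistent world anyone can walk into.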
Second half of the decade: Earth-2 and gaming
By the end of the decade, NVIDIA’s Earth-2 project should be viable. It is currently the most aggressive public project in progress, and Earth-2 could well become the foundation for a far broader use of the concept. Initially, Earth-2 will be limited by the technology available at the time, but once it is workable, it will be able to predict weather events more accurately and model potential climate change remedies better than the simulations we have today.
Game engines should also begin to aggressively use the metaverse to create game maps, letting people increasingly play in areas modeled on the neighborhoods near their homes, even if those maps are not yet fully complete or accurate.
In addition, by the end of the 2020s, we should have workable 2D treadmills; lighter, higher-resolution headsets; and haptic gloves that allow us to interact with virtual objects as if they were physically present. We will also see the emergence (some have already appeared) of virtual reality (VR) chairs, vests, scent generators, and far better directional sound systems that will improve our ability to immerse ourselves in these emerging virtual worlds.
2030s
By the end of the 2030s, substantial parts of the world will be fully digitized and explorable. We will see the benefits of working mostly in a virtual environment. We will each have one or more photorealistic avatars, and we will have the beginnings of virtual immortality as these avatars are increasingly allowed to emulate the behaviors of real people. It is doubtful these avatars will truly be sentient, but we may not be able to tell the difference. Within tightly defined parameters, these avatars will increasingly be able to act as our agents and companions, and some may even be considered co-workers.
For many of us, physical travel may give way to virtual travel, and low-cost agencies that supply virtual travel experiences are likely to become a thing. We still will not be at the Holodeck level, but we will start seeing the light at the end of the tunnel.
2040s
In the 2040s, I expect we will be able to fully interact with the AIs in the metaverse and become unable to tell the difference between a virtualized real person and an AI-driven avatar. This will undoubtedly create some social behavioral problems, as people explore the unlimited freedoms of their personal virtual worlds and forget that those same behaviors are not allowed in the real world. Avatars should be able to hold down jobs in this period, but the social implications of this capability may take years, if not decades, to resolve. This will once again force us to reconsider what constitutes sentience and life, because the digital twins of living things, including people, will become indistinguishable from the real thing in the metaverse.
While we will have been exploring blends of virtual and real elements since the mid-2020s, that blend will become constant by 2040, and surgically implanted interfaces to the metaverse should be practical in the same timeframe. Whether or not religious beliefs block their adoption in some countries, we will have the capability in this period to invasively implant metaverse interfaces in people.
Wrapping up
We are at the point in the development of the metaverse where we were with the internet before Netscape. The metaverse will require massive improvements in processing power, network performance, and AI capabilities, and it will not truly mature until the late 2030s or ’40s. Until then, it will mostly be used for simulation, gaming and, increasingly, movie creation. By the time we finish this advancement phase, we will have surgically implanted interfaces, AI non-player characters (NPCs) that present as real people, and a blending of metaverse elements with the real world that will change how we see that world.
However, despite this long timeline, the metaverse is workable today for simulation, and now is the time to anticipate and plan for its far deeper incursion into reality. By the time we reach metaverse maturity in the late 2030s and ’40s, we’d better have decided on related laws, protections, and what we are going to do with intelligent virtual AIs, or things are likely to end very poorly. But for now, the focus should be on building core competencies, understanding what is currently possible with the metaverse, and not setting the related expectations unreasonably high.