The Metaverse Could Help Us Better Understand Reality

The killer app for ambitious virtual reality could be our world

The planet Earth sitting on a mirror, with another Earth in the reflection, on a red background. Illustration: Edmon de Haro

Certain neologisms seem to pop up, then disappear, only to return in another guise. William Gibson's award-winning 1984 science fiction classic Neuromancer popularized the word cyberspace, a meaningless portmanteau that went viral and eventually became shorthand for the totality of the online world.

We're now seeing something similar happen with the word metaverse, coined in Neal Stephenson's 1992 novel Snow Crash, where it referred to the successor of our two-dimensional Internet. The word resurfaced a short time later in the product road maps of a hundred failed startups, and it is returning now as the plaything of Big Tech.

We're hearing that everyone will need access to the metaverse, that this virtual universe will be the place where we'll all soon be working and playing. Whether anyone other than a few true believers would be willing to tolerate, for more than a few minutes, the sweaty, fogged-up insides of the head-mounted display needed to get there remains an open question.

Immersive virtual reality hasn't progressed much in conception or implementation from the systems prototyped three decades ago—certainly not enough to present it as the future of people's daily work environment. It's probably going to remain an awkward place to visit for a long time yet, so there must be a good reason to go.

What, then, could such a metaverse be for? Could its purpose be so important that we'd willingly endure both the physical discomfort of wearing a head-mounted display and the often disturbingly unnatural representations of people in these 3D virtual worlds?

The metaverse has always had two faces—one looking within, to our imaginations, the other looking outward, to the real world. Early efforts in this realm, such as ART+COM's T_Vision, built metaverses that represent the Earth, inspiring projects such as Google Earth, which offers a richer and more complex metaverse of the real. These are prototypes for the many metaverses to come because they provide us with enormously useful views of the real world.

A fundamental but largely unrecognized power of virtual reality lies in its capacity to give us insight into processes either too large or too small to be directly observable. We can at a glance view something as big as our planet or as small as a cell, making things we understood only in the most abstract sense become both tangible and actionable.

These "metaversal" powers are of immense value because they allow us to observe and comprehend the nature and consequences of our activities. With such tools to help us see what we're doing—as individuals, as nations, and as a species—we gain the opportunity to learn from our actions. Without them, we're flying blind, manipulating our environment on a global scale, but without proper understanding of the consequences.

So if we technologists are going to build a metaverse, let's start with a mirror world: a high-fidelity reflection of the real world, in all of its richness, complexity, and unpredictability. Encompassing the totality of the world within such a metaverse won't be easy—it will no doubt take our best minds years of work. But along the way we will be learning, because as we construct that mirror and gaze into it, our blind spots will be revealed. We can then take what we learn and immediately put it to work protecting the real environment around us. And that's reason enough to undertake a planet-scale construction project in cyberspace.

This article appears in the November 2021 print issue as "Mirror Worlds."

