Is the Metaverse Even Feasible?

Just making the network work will require new technologies and vast sums of money


Meta’s marketing shows friends around a virtual table, but experts say this scene is not as simple as it appears.


If you ask Meta, or its peers, whether the metaverse is possible, the answer is confident: Yes—it’s just a matter of time. The challenges are vast, but technology will overcome them. This may be true of many problems facing the metaverse: Better displays, more sensitive sensors, and quicker consumer hardware will prove key. But not all problems can be overcome with improvements to existing technology. The metaverse may find itself bound by technical barriers that aren’t easily scaled by piling dollars against them.

The vision of the metaverse pushed by Meta is a fully simulated “embodied internet” experienced through an avatar. This implies a realistic experience where users can move through space at will and pick up objects with ease. But the metaverse, as it exists today, falls far short. Movement is restricted and objects rarely react as expected, if at all.

Louis Rosenberg, CEO of Unanimous AI and a longtime developer of augmented-reality systems, says the reason is simple: You’re not really there, and you’re not really moving.

“We humans have bodies,” Rosenberg said in an email. “If we were just a pair of eyes on an adjustable neck, VR headsets would work great. But we do have bodies, and it causes a problem I describe as ‘perceptual inconsistency.’ ”

Meta frequently demos an example of this problem—friends surrounding a virtual table. The company’s press materials depict avatars fluidly moving around a table, standing up and sitting at a moment’s notice, interacting with the table and chairs as if they were real, physical objects.

“That can’t happen. The table is not there,” says Rosenberg. “In fact, if you tried to pretend to lean on the table, to make your avatar look like that, your hand would go right through it.”

Developers can attempt to fix the problem with collision detection that stops your hand from moving through the table. But remember—the table is not there. If your hand stops in the metaverse, but continues to move in reality, you may feel disoriented. It’s a bit like a prankster yanking a chair from beneath you moments before you sit down.
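The mismatch is easy to see in miniature. In the sketch below, collision detection clamps the avatar’s hand at the virtual tabletop while the tracked real hand keeps descending, so the gap between the two positions grows (the heights and the function name are illustrative, not from any actual engine):

```python
# Illustrative sketch: a virtual hand clamped at a virtual tabletop
# while the real (tracked) hand keeps descending through empty air.
TABLE_HEIGHT = 0.75  # meters; height of the virtual tabletop

def virtual_hand_y(real_hand_y: float) -> float:
    """Collision detection: the avatar's hand cannot pass below the tabletop."""
    return max(real_hand_y, TABLE_HEIGHT)

# The real hand descends from 0.9 m to 0.6 m; the virtual hand stops at 0.75 m.
for real_y in (0.9, 0.8, 0.75, 0.7, 0.6):
    mismatch = virtual_hand_y(real_y) - real_y
    print(f"real {real_y:.2f} m  virtual {virtual_hand_y(real_y):.2f} m  "
          f"mismatch {mismatch:.2f} m")
```

The growing mismatch between what the user feels (a hand moving freely) and what the user sees (a hand stopped at a surface) is precisely the perceptual inconsistency Rosenberg describes.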

Meta is working on EEG and ECG biosensors which might let you move in the metaverse with a thought. This could improve range of movement and stop unwanted contact with real-world objects while moving in virtual space. However, even this can’t offer full immersion. The table still does not exist, and you still can’t feel its surface.

Rosenberg believes this will limit the potential of a VR metaverse to “short duration activities” like playing a game or shopping. He sees augmented reality as a more comfortable long-term solution. AR, unlike VR, augments the real world instead of creating a simulation, which sidesteps the problem of perceptual inconsistency. With AR, you’re interacting with a table that’s really there.

Figuring out how to translate our physical forms to virtual avatars is one hurdle, but even if that’s solved, the metaverse will face another issue: moving data between users thousands of miles apart with very low latency.

“To be truly immersive, the round trip between user action and simulation reaction must be imperceptible to the user,” Jerry Heinz, a member of the Ball Metaverse Index’s expert council, said in an email. “In some cases, ‘imperceptible’ is less than 15 milliseconds.”
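That 15-millisecond budget runs into physics quickly. Light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200 kilometers per millisecond, so propagation delay alone—before any routing, queuing, rendering, or display time—rules out distant users. A back-of-envelope check (route distances are approximate):

```python
# Back-of-envelope round-trip propagation delay over optical fiber.
FIBER_SPEED_KM_PER_MS = 200.0  # ~2/3 the speed of light in vacuum
BUDGET_MS = 15.0               # "imperceptible" round-trip budget

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation time over fiber, ignoring all other delays."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for route, km in [("same metro area", 50),
                  ("New York to Chicago", 1_150),
                  ("New York to London", 5_600)]:
    rt = round_trip_ms(km)
    verdict = "within" if rt <= BUDGET_MS else "exceeds"
    print(f"{route}: ~{rt:.1f} ms round trip ({verdict} the 15 ms budget)")
```

A transatlantic round trip alone costs roughly 56 milliseconds in the fiber—nearly four times the budget—which is why proximity to the user matters so much in what follows.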

Heinz, formerly the head of Nvidia’s enterprise cloud services, has first-hand experience with this problem. Nvidia’s GeForce Now service lets customers play games in real time on hardware located in a data center. This demands high bandwidth and low latency. According to Heinz, GeForce Now averages about 30 megabits per second down and 80 milliseconds round trip, with only a few dropped frames.

Modern cloud services like GeForce Now handle user load through content delivery networks, which host content in data centers close to users. When you connect to a game via GeForce Now, data is not delivered from a central data center used by all players but instead from the closest data center available.
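The routing decision itself is simple in principle: probe the candidate data centers and pick the one with the lowest measured latency to the user. A toy sketch of that selection (the data-center names and latency figures are invented):

```python
# Toy nearest-data-center selection, as a CDN might perform for one user.
measured_rtt_ms = {     # hypothetical probe results for a single user
    "us-east": 12.0,
    "us-west": 71.0,
    "eu-west": 89.0,
}

def pick_data_center(rtt_by_dc: dict[str, float]) -> str:
    """Return the data center with the lowest measured round-trip time."""
    return min(rtt_by_dc, key=rtt_by_dc.get)

print(pick_data_center(measured_rtt_ms))  # -> us-east
```

The catch, as the next paragraph explains, is that this logic assumes each user can be served independently. Two users sharing one metaverse session from different continents have no single “closest” data center.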

The metaverse throws a wrench in the works. Users may exist anywhere in the world and the path data travels between users may not be under the platform’s control. To solve this, metaverse platforms need more than scale. They need network infrastructure that spans many clusters of servers working together across multiple data centers.

“The interconnects between clusters and servers would need to change versus the loose affinity they have today,” said Heinz. “To further reduce latency, service providers may well need to offer rendering and compute at their edge, while backhauling state data to central servers.”
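Heinz’s split can be pictured as a bandwidth argument: rendered video is heavy and latency-critical, so it stays at the edge near the user, while the authoritative world state is compact and can tolerate the longer backhaul to a central server. A rough comparison (the state-update figures are illustrative order-of-magnitude estimates, not measurements):

```python
# Rough comparison of what flows where in an edge-render architecture.
# Edge -> user: compressed game video, using GeForce Now's ~30 Mbit/s average.
video_mbit_per_s = 30.0

# Edge -> central server: avatar state (position, orientation, inputs);
# assume ~100 bytes per update at 60 updates per second.
state_bytes_per_update = 100
updates_per_s = 60
state_mbit_per_s = state_bytes_per_update * updates_per_s * 8 / 1_000_000

print(f"video stream to user:     {video_mbit_per_s:.3f} Mbit/s")
print(f"state data to backhaul:   {state_mbit_per_s:.3f} Mbit/s")
print(f"ratio:                    ~{video_mbit_per_s / state_mbit_per_s:.0f}x")
```

Under these assumptions the video stream is hundreds of times heavier than the state data, which is why rendering near the user while centralizing only the state is an attractive division of labor.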

The problems of perceptual inconsistency and network infrastructure may be solvable but, even so, they’ll require many years of work and huge sums of money. Meta’s Reality Labs lost over US $20 billion in the past three years. That, it seems, is just the tip of the iceberg.


Metamaterials Could Solve One of 6G’s Big Problems

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces


Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas.

Chris Philpot

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. So as the waves fall on such a surface, it can alter the incident waves’ direction so as to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.
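The “reconfigure” in reconfigurable intelligent surface largely means programming a phase shift into each element so that the reflected wavefronts add up coherently in the desired direction. For a linear array, the generalized Snell’s law gives the required phase gradient along the surface. A sketch of that calculation, using a simplified far-field model with illustrative frequency, geometry, and sign convention:

```python
import math

# Simplified far-field phase profile for a linear reconfigurable surface
# steering a reflected beam. Frequency and geometry are illustrative.
freq_hz = 28e9                      # 28 GHz, a typical mmWave band
wavelength = 3e8 / freq_hz          # ~10.7 mm
spacing = wavelength / 2            # half-wavelength element spacing
k = 2 * math.pi / wavelength        # wavenumber

theta_in = math.radians(30)         # incident angle from the surface normal
theta_out = math.radians(-20)       # desired reflection angle

def element_phase(n: int) -> float:
    """Phase shift (radians, in [0, 2*pi)) programmed into element n so the
    reflected wavefronts add coherently toward theta_out. Follows the
    generalized Snell's law phase gradient k*(sin(theta_out) - sin(theta_in));
    sign conventions vary between treatments."""
    x = n * spacing                 # element position along the surface
    phase = k * x * (math.sin(theta_out) - math.sin(theta_in))
    return phase % (2 * math.pi)

for n in range(4):
    print(f"element {n}: {math.degrees(element_phase(n)):6.1f} deg")
```

In a deployed surface, firmware would recompute this phase profile on the fly as users move—that real-time reprogramming is what distinguishes these surfaces from a fixed reflector or a simple repeater.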
