Just as highway networks may suffer from snarls of traffic, so too may computer networks face congestion. Now a new study finds that many key algorithms designed to control congestion on computer networks may prove deeply unfair, letting some users hog all the bandwidth while others get essentially nothing.

Computers and other devices that send data over the internet break it down into smaller packets and then use special algorithms to decide how fast to send these packets. These congestion-control algorithms aim to discover and exploit all the available network capacity while sharing it with other users on the same network.

Over the past decade, researchers have developed several congestion-control algorithms that seek to achieve high rates of data transmission while minimizing the delays resulting from data waiting in queues in the network. Some of these, such as Google’s BBR algorithm, are now widely used by many websites and applications.

“Extreme unfairness happens even when everybody cooperates, and it is nobody’s fault.”
—Venkat Arun, MIT

However, although hundreds of congestion-control algorithms have been proposed over roughly the past 40 years, “there is no clear winner,” says study lead author Venkat Arun, a computer scientist at MIT. “I was frustrated by how little we knew about where these algorithms would and would not work. This motivated me to create a mathematical model that could make more systematic predictions.”

Unexpectedly, Arun and his colleagues have found that many congestion-control algorithms may prove highly unfair. Their new study finds that, given the real-world complexity of network paths, there will always be a scenario in which a problem known as “starvation” cannot be avoided—where at least one sender on a network receives almost no bandwidth compared with other users.

A user’s computer does not know how fast to send data packets because it lacks knowledge about the network, such as how many other senders are on it or the quality of the connection. Sending packets too slowly makes poor use of the available bandwidth. However, sending packets too quickly may overwhelm a network, resulting in packets getting dropped. These packets then need to be sent again, resulting in delays. Delays may also result from packets waiting in queues for a long time.
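
To see the basic shape of such an algorithm, consider TCP’s classic additive-increase/multiplicative-decrease (AIMD) rule: grow the sending window gently while all seems well, and cut it sharply at signs of trouble. The sketch below is a minimal schematic of that idea, not any specific algorithm from the study.

```python
def aimd_update(cwnd: float, packet_lost: bool) -> float:
    """One TCP-style AIMD step (schematic). `cwnd` is the congestion
    window: how many packets may be in flight at once."""
    if packet_lost:
        return max(cwnd / 2.0, 1.0)  # multiplicative decrease on loss
    return cwnd + 1.0                # additive increase per round-trip

# Example: a loss halves the window; loss-free round-trips rebuild it.
cwnd = 16.0
cwnd = aimd_update(cwnd, packet_lost=True)   # -> 8.0
cwnd = aimd_update(cwnd, packet_lost=False)  # -> 9.0
```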

Congestion-control algorithms rely on packet losses and delays as signals from which to infer congestion and decide how fast to send data. However, packets can get lost or delayed for reasons other than network congestion. For example, data may be held up and then released in a burst with other packets, or a receiver’s acknowledgement that it received packets might get delayed. The researchers called delays that do not result from congestion “jitter.”

Congestion-control algorithms cannot distinguish delays caused by congestion from delays caused by jitter. This can lead to problems, because delays caused by jitter are unpredictable. The ambiguity confuses senders, which can make each of them estimate delay differently and send packets at unequal rates. The researchers found this eventually leads to situations where starvation occurs and some users get shut out completely.
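
A toy simulation makes the failure mode concrete. In the hypothetical model below, two senders share a bottleneck and follow the same delay-threshold rule; sender A’s path adds bursty non-congestive delay (jitter) while sender B’s path is clean. Because A misreads jitter as congestion, it keeps backing off and ends up with a tiny fraction of the link. This is only an illustration of the ambiguity, not the paper’s formal model.

```python
import random

CAPACITY = 100.0  # bottleneck capacity, packets per tick
TARGET = 2.0      # queueing-delay threshold, ticks (hypothetical rule)
rates = {"A": 50.0, "B": 50.0}
queue = 0.0
random.seed(1)

for tick in range(5000):
    queue = max(0.0, queue + rates["A"] + rates["B"] - CAPACITY)
    q_delay = queue / CAPACITY            # congestive delay, seen by both

    for sender in rates:
        # Sender A suffers bursty non-congestive delay; B's path is clean.
        jitter = random.uniform(0.0, 4.0) if sender == "A" else 0.0
        if q_delay + jitter > TARGET:
            rates[sender] *= 0.95         # looks congested: back off
        else:
            rates[sender] += 1.0          # looks idle: probe for bandwidth
        rates[sender] = max(rates[sender], 0.1)

print(rates)  # sender A ends up near starvation; B takes nearly everything
```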

In the new study, the researchers analyzed every congestion-control algorithm they were aware of, as well as some new ones they devised, to see whether each could avoid starvation. The scientists were surprised to find that for every algorithm there were scenarios where some people got all the bandwidth and at least one person got basically nothing.

“Some users could be experiencing very poor performance, and we didn’t know about it sooner,” Arun says. “Extreme unfairness happens even when everybody cooperates, and it is nobody’s fault.”

The researchers found that all existing congestion-control algorithms that seek to curb delays are what they call “delay-convergent algorithms,” and that these will always suffer from starvation. That this weakness in widely used algorithms remained unknown for so long is likely because empirical testing alone “could attribute poor performance to insufficient network capacity rather than poor algorithmic decision-making,” Arun says.

Although existing approaches toward congestion control may not be able to avoid starvation, the aim now is to develop a new strategy that does, Arun says. “Better algorithms can enable predictable performance at a reduced cost,” he says.

Arun notes that this research may have applications beyond analyzing network congestion. “We are currently using our method of modeling computer systems to reason about other algorithms that allocate resources in computer systems,” he says. “The goal is to help build systems with predictable performance, which is important since we rely on computers for increasingly critical things. For instance, lives could depend on self-driving cars making timely decisions.”

The scientists will detail their findings on 24 August at the ACM Special Interest Group on Data Communications (SIGCOMM) conference.


Metamaterials Could Solve One of 6G’s Big Problems

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces


Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas.

Chris Philpot

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. As waves fall on such a surface, it can thus redirect them so as to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.

The key feature that makes an RIS attractive in comparison with alternatives such as active repeaters is its nearly passive nature. The absence of amplifiers to boost the signal means that an RIS node can be powered with just a battery and a small solar panel.

RIS functions like a very sophisticated mirror, whose orientation and curvature can be adjusted in order to focus and redirect a signal in a specific direction. But rather than physically moving or reshaping the mirror, you electronically alter its surface so that it changes key properties of the incoming electromagnetic wave, such as the phase.

That’s what the metamaterials do. This emerging class of materials exhibits properties beyond (from the Greek meta) those of natural materials, such as anomalous reflection or refraction. The materials are fabricated using ordinary metals and electrical insulators, or dielectrics. As an electromagnetic wave impinges on a metamaterial, a predetermined gradient in the material alters the phase and other characteristics of the wave, making it possible to bend the wave front and redirect the beam as desired.

An RIS node is made up of hundreds or thousands of metamaterial elements called unit cells. Each cell consists of metallic and dielectric layers along with one or more switches or other tunable components. A typical structure includes an upper metallic patch with switches, a biasing layer, and a metallic ground layer separated by dielectric substrates. By controlling the biasing—the voltage between the metallic patch and the ground layer—you can switch each unit cell on or off and thus control how each cell alters the phase and other characteristics of an incident wave.

To control the direction of the larger wave reflecting off the entire RIS, you synchronize all the unit cells to create patterns of constructive and destructive interference in the larger reflected waves [see illustration below]. This interference pattern reforms the incident beam and sends it in a particular direction determined by the pattern. This basic operating principle, by the way, is the same as that of a phased-array radar.

Beamforming by constructive and destructive interference

Erik Vrielink

A reconfigurable intelligent surface comprises an array of unit cells. In each unit cell, a metamaterial alters the phase of an incoming radio wave, so that the resulting waves interfere with one another [above, top]. Precisely controlling the patterns of this constructive and destructive interference allows the reflected wave to be redirected [bottom], improving signal coverage.
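
The phased-array analogy can be made concrete in a few lines of code. The sketch below, a simplified one-dimensional model with illustrative numbers, computes the per-cell phase profile that redirects an incident plane wave toward a chosen angle, then quantizes it to the binary on/off coding described later in this article.

```python
import numpy as np

def ris_phase_profile(n_cells: int, pitch_m: float, freq_hz: float,
                      theta_in_deg: float, theta_out_deg: float) -> np.ndarray:
    """Per-cell phase shifts (radians) that steer a reflected plane wave
    from theta_in to theta_out. A 1-D row of cells for simplicity; a real
    RIS is a 2-D array."""
    k = 2 * np.pi * freq_hz / 3e8  # free-space wavenumber
    gradient = (np.sin(np.radians(theta_out_deg))
                - np.sin(np.radians(theta_in_deg)))
    return np.mod(-k * pitch_m * np.arange(n_cells) * gradient, 2 * np.pi)

def binary_code(phase: np.ndarray) -> np.ndarray:
    """1-bit quantization: each cell applies 0 or pi, set by its switch."""
    return (phase > np.pi).astype(int)

# Illustrative case: 64 cells at half-wavelength pitch for 28 GHz,
# steering a normally incident beam 30 degrees off axis.
profile = ris_phase_profile(64, pitch_m=0.0054, freq_hz=28e9,
                            theta_in_deg=0.0, theta_out_deg=30.0)
print(binary_code(profile)[:16])
```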

An RIS has other useful features. Even without an amplifier, an RIS manages to provide substantial gain—about 30 to 40 decibels relative to isotropic (dBi)—depending on the size of the surface and the frequency. That’s because the gain of an antenna is proportional to the antenna’s aperture area. An RIS has the equivalent of many antenna elements covering a large aperture area, so it has higher gain than a conventional antenna does.
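
That proportionality is easy to check with the standard aperture-gain formula, G = 4πA/λ². The quick calculation below assumes an ideal, lossless surface, so practical RIS hardware will come in somewhat lower.

```python
import math

def aperture_gain_dbi(area_m2: float, freq_hz: float) -> float:
    """Ideal aperture gain, G = 4*pi*A / lambda^2, in dBi
    (assumes a lossless surface; real hardware achieves less)."""
    lam = 3e8 / freq_hz
    return 10 * math.log10(4 * math.pi * area_m2 / lam ** 2)

print(aperture_gain_dbi(1.0, 3.5e9))  # ~32 dBi: a 1 m^2 surface at C band
print(aperture_gain_dbi(0.04, 28e9))  # ~36 dBi: 20 cm x 20 cm at 28 GHz
```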

All the many unit cells in an RIS are controlled by a logic chip, such as a field-programmable gate array with a microcontroller, which also stores the many coding sequences needed to dynamically tune the RIS. The controller gives the appropriate instructions to the individual unit cells, setting their state. The most common coding scheme is simple binary coding, in which the controller toggles the switches of each unit cell on and off. The unit-cell switches are usually semiconductor devices, such as PIN diodes or field-effect transistors.

The important factors here are power consumption, speed, and flexibility; the control circuit is usually one of the most power-hungry parts of an RIS. Reasonably efficient RIS implementations today consume a total of a few watts to around a dozen watts while reconfiguring, and much less in the idle state.

Engineers use simulations to decide where to deploy RIS nodes

To deploy RIS nodes in a real-world network, researchers must first answer three questions: How many RIS nodes are needed? Where should they be placed? And how big should the surfaces be? As you might expect, there are complicated calculations and trade-offs.

Engineers can identify the best RIS positions by planning for them when the base station is designed, or afterward, by identifying the areas of poor signal strength on the coverage map. As for the size of the surfaces, that will depend on the frequencies (lower frequencies require larger surfaces) as well as the number of surfaces being deployed.

To optimize the network’s performance, researchers rely on simulations and measurements. At Huawei Sweden, where I work, we’ve had a lot of discussions about the best placement of RIS units in urban environments. We’re using a proprietary platform, called the Coffee Grinder Simulator, to simulate an RIS installation prior to its construction and deployment. We’re partnering with CNRS Research and CentraleSupélec, both in France, among others.

In a recent project, we used simulations to quantify the performance improvement gained when multiple RIS were deployed in a typical urban 5G network. As far as we know, this was the first large-scale, system-level attempt to gauge RIS performance in that setting. We optimized the RIS-augmented wireless coverage through the use of efficient deployment algorithms that we developed. Given the locations of the base stations and the users, the algorithms were designed to help us select the optimal three-dimensional locations and sizes of the RIS nodes from among thousands of possible positions on walls, roofs, corners, and so on. The output of the software is an RIS deployment map that maximizes the number of users able to receive a target signal.
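
The deployment algorithms themselves are proprietary, but a minimal greedy sketch conveys the flavor of the selection problem: repeatedly pick the candidate site that serves the most still-uncovered users. The function and its toy inputs here are hypothetical stand-ins for the output of a coverage simulation.

```python
def greedy_ris_placement(candidates, coverage, budget):
    """Greedy maximum-coverage heuristic (hypothetical sketch).
    `coverage[site]` is the set of users a candidate site would serve,
    as predicted by a coverage simulation; `budget` is how many RIS
    nodes can be deployed."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no remaining site helps any uncovered user
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy example: three candidate positions, five users.
coverage = {"wall1": {1, 2}, "wall2": {2, 3, 4}, "roof1": {5}}
print(greedy_ris_placement(list(coverage), coverage, budget=2))
# -> (['wall2', 'wall1'], {1, 2, 3, 4})
```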


An experimental reconfigurable intelligent surface with 2,304 unit cells was tested at Tsinghua University, in Beijing, last year.

Tsinghua University

Of course, the users of special interest are those at the edges of the cell-coverage area, who have the worst signal reception. Our results showed big improvements in coverage and data rates at the cell edges—and also for users with decent signal reception, especially in the millimeter band.

We also investigated how potential RIS hardware trade-offs affect performance. Simply put, every RIS design requires compromises—such as digitizing the responses of each unit cell into binary phases and amplitudes—in order to construct a less complex and cheaper RIS. But it’s important to know whether a design compromise will create additional beams to undesired directions or cause interference to other users. That’s why we studied the impact of network interference due to multiple base stations, reradiated waves by the RIS, and other factors.

Not surprisingly, our simulations confirmed that both larger RIS surfaces and larger numbers of them improved overall performance. But which is preferable? When we factored in the costs of the RIS nodes and the base stations, we found that in general a smaller number of larger RIS nodes, deployed further from a base station and its users to provide coverage to a larger area, was a particularly cost-effective solution.

The size of an RIS depends on the operating frequency [see illustration below]. We found that a small number of rectangular RIS nodes, each around 4 meters wide for C-band frequencies (3.5 GHz) and around half a meter wide for the millimeter-wave band (28 GHz), was a good compromise and could boost performance significantly in both bands. This was a pleasant surprise: RIS improved signals not only in the millimeter-wave (5G high) band, where coverage problems can be especially acute, but also in the C band (5G mid).

Illustration: Marios Poulakis
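
Those two widths are consistent with each other: measured in wavelengths, the C-band and millimeter-wave surfaces are about the same electrical size, as this quick check shows.

```python
C_LIGHT = 3e8
lam_c = C_LIGHT / 3.5e9   # ~0.086 m wavelength at C band
lam_mm = C_LIGHT / 28e9   # ~0.011 m wavelength at 28 GHz

print(4.0 / lam_c)    # ~47 wavelengths across a 4 m surface
print(0.5 / lam_mm)   # ~47 wavelengths across a 0.5 m surface
```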


To extend wireless coverage indoors, researchers in Asia are investigating a really intriguing possibility: covering room windows with transparent RIS nodes. Experiments at NTT Docomo in Japan and at Southeast and Nanjing universities in China used smart films or smart glass. The films are fabricated from transparent conductive oxides (such as indium tin oxide), graphene, or silver nanowires and do not noticeably reduce light transmission. When the films are placed on windows, signals coming from outside can be refracted and boosted as they pass into a building, enhancing the coverage inside.

What will it take to make RIS nodes intelligent?

Planning and installing the RIS nodes is only part of the challenge. For an RIS node to work optimally, it needs to have a configuration, moment by moment, that is appropriate for the state of the communication channel in the instant the node is being used. The best configuration requires an accurate and instantaneous estimate of the channel. Technicians can come up with such an estimate by measuring the “channel impulse response” between the base station, the RIS, and the users. This response is measured using pilots, which are reference signals known beforehand by both the transmitter and the receiver. It’s a standard technique in wireless communications. Based on this estimation of the channel, it’s possible to calculate the phase shifts for each unit cell in the RIS.
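
As a sketch of that last step: under the standard model in which the signal reaching the user is a sum over unit cells of (base-station-to-cell channel) × (cell phase shift) × (cell-to-user channel), the phases that make every path add coherently are θₙ = −(∠hₙ + ∠gₙ). The code below assumes the per-cell channels have already been estimated from pilots.

```python
import numpy as np

def ris_phases(h_bs_ris: np.ndarray, g_ris_user: np.ndarray) -> np.ndarray:
    """Phase per unit cell so all cascaded paths add coherently at the
    user: theta_n = -(angle(h_n) + angle(g_n)). Assumes per-cell channels
    h and g were already estimated from pilot signals."""
    return np.mod(-(np.angle(h_bs_ris) + np.angle(g_ris_user)), 2 * np.pi)

# Toy check with random channels for a 256-cell surface.
rng = np.random.default_rng(0)
h = rng.standard_normal(256) + 1j * rng.standard_normal(256)
g = rng.standard_normal(256) + 1j * rng.standard_normal(256)
theta = ris_phases(h, g)
print(abs(np.sum(h * np.exp(1j * theta) * g)))  # coherent: sum of |h_n||g_n|
print(abs(np.sum(h * g)))                       # uncontrolled: far smaller
```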

The current approaches perform these calculations at the base station. However, that requires a huge number of pilots, because every unit cell needs its own phase configuration. There are various ideas for reducing this overhead, but so far none of them are really promising.

The total calculated configuration for all of the unit cells is fed to each RIS node through a wireless control link. So each RIS node needs a wireless receiver to periodically collect the instructions. This of course consumes power, and it also means that the RIS nodes are fully dependent on the base station, with unavoidable—and unaffordable—overhead and the need for continuous control. As a result, the whole system requires a flawless and complex orchestration of base stations and multiple RIS nodes via the wireless-control channels.

We need a better way. Recall that the “I” in RIS stands for intelligent. The word suggests real-time, dynamic control of the surface from within the node itself—the ability to learn, understand, and react to changes. We don’t have that now. Today’s RIS nodes cannot perceive, reason, or respond; they only execute remote orders from the base station. That’s why my colleagues and I at Huawei have started working on a project we call Autonomous RIS (AutoRIS). The goal is to enable the RIS nodes to autonomously control and configure the phase shifts of their unit cells. That will largely eliminate the base-station-based control and the massive signaling that either limit the data-rate gains from using RIS, or require synchronization and additional power consumption at the nodes. The success of AutoRIS might very well help determine whether RIS will ever be deployed commercially on a large scale.

Of course, it’s a rather daunting challenge to integrate into an RIS node the necessary receiving and processing capabilities while keeping the node lightweight and low power. In fact, it will require a huge research effort. For RIS to be commercially competitive, it will have to preserve its low-power nature.

With that in mind, we are now exploring the integration of an ultralow-power AI chip in an RIS, as well as the use of extremely efficient machine-learning models to provide the intelligence. These smart models will be able to produce the output RIS configuration based on the received data about the channel, while at the same time classifying users according to their contracted services and their network operator. Integrating AI into the RIS will also enable other functions, such as dynamically predicting upcoming RIS configurations and grouping users by location or other behavioral characteristics that affect the RIS operation.

Intelligent, autonomous RIS won’t be necessary for all situations. For some areas, a static RIS, with occasional reconfiguration—perhaps a couple of times per day or less—will be entirely adequate. In fact, there will undoubtedly be a range of deployments from static to fully intelligent and autonomous. Success will depend on not just efficiency and high performance but also ease of integration into an existing network.

6G promises to unleash staggering amounts of bandwidth—but only if we can surmount a potentially ruinous range problem.

The real test case for RIS will be 6G. The coming generation of wireless is expected to embrace autonomous networks and smart environments with real-time, flexible, software-defined, and adaptive control. Compared with 5G, 6G is expected to provide much higher data rates, greater coverage, lower latency, more intelligence, and sensing services of much higher accuracy. At the same time, a key driver for 6G is sustainability—we’ll need more energy-efficient solutions to achieve the “net zero” emission targets that many network operators are striving for. RIS fits all of those imperatives.

Start with massive MIMO, which stands for multiple-input multiple-output. This foundational 5G technique uses multiple antennas packed into an array at both the transmitting and receiving ends of wireless channels, to send and receive many signals at once and thus dramatically boost network capacity. However, the desire for higher data rates in 6G will demand even more massive MIMO, which will require many more radio-frequency chains to work and will be power-hungry and costly to operate. An energy-efficient and less costly alternative will be to place multiple low-power RIS nodes between massive MIMO base stations and users as we have described in this article.

The millimeter-wave and subterahertz 6G bands promise to unleash staggering amounts of bandwidth, but only if we can surmount a potentially ruinous range problem without resorting to costly solutions, such as ultradense deployments of base stations or active repeaters. My opinion is that only RIS will be able to make these frequency bands commercially viable at a reasonable cost.

The communications industry is already touting sensing—high-accuracy localization services as well as object detection and posture recognition—as an important possible feature for 6G. Sensing would also enhance performance. For example, highly accurate localization of users will help steer wireless beams efficiently. Sensing could also be offered as a new network service to vertical industries such as smart factories and autonomous driving, where detection of people or cars could be used for mapping an environment; the same capability could be used for surveillance in a home-security system. The large aperture of RIS nodes and their resulting high resolution mean that such applications will be not only possible but probably even cost effective.

And the sky is not the limit. RIS could enable the integration of satellites into 6G networks. Typically, a satellite uses a lot of power and has large antennas to compensate for the long-distance propagation losses and for the modest capabilities of mobile devices on Earth. RIS could play a big role in minimizing those limitations and perhaps even allowing direct communication from satellite to 6G users. Such a scheme could lead to more efficient satellite-integrated 6G networks.

As it transitions to new services and vast new frequency regimes, wireless communication will soon enter a period of great promise and sobering challenges. Many technologies will be needed to usher in this next exciting phase. None will be more essential than reconfigurable intelligent surfaces.

The author wishes to acknowledge the help of Ulrik Imberg in the writing of this article.
