On Wednesday, in the former San Francisco church that now serves as the headquarters of the Internet Archive, pioneers of the Internet and the World Wide Web joined together to call for a new kind of Web—a decentralized Web. It was a call for change, a call for action, and a call to develop technology that would “lock the Web open.”

And in the audience were the developers and entrepreneurs and thinkers who are going to try to answer that call. These men and women (because the next Web will have mothers as well as fathers), many sporting dreadlocks or tattoos, grew up with the Internet and love the Web, but believe it can be better and are determined to make it so.

This meeting, the Decentralized Web Summit, was part of a 3-day event organized by Brewster Kahle, founder of the Internet Archive, and sponsored by the Internet Archive, the Ford Foundation, Google, Mozilla, and others. It was as much a revival meeting as a tech conference, a feeling enhanced by the rows of pews that made up the seating. There was a lot of fan-boying and fan-girling going on, as the tech leaders of tomorrow buzzed about how they might get this or that luminary to sign their laptops. (Had there been printed programs—there were not—I’m guessing the rush for autographs would have been intense.)

Today’s Web has a number of problems, the attendees agreed. The most obvious are the kind of surveillance uncovered by Edward Snowden’s revelations and the ability to block access, exemplified by China’s Great Firewall.

“That utopian leveling of society, the reinvention of systems of debate...what happened to that?”

Tim Berners-Lee, who invented the Web and is now director of the World Wide Web Consortium, pointed out how far it has strayed from the original dreams for the technology. “That utopian leveling of society, the reinvention of the systems of debate and government—what happened to that?” he asked. “We hoped everyone would be making their own web sites—turns out people are afraid to.”

But even the basic things people want to do aren’t possible, because instead of being a true, interconnected web, it has become a collection of silos. “People have their friends on Facebook and some photos on Flickr and their colleagues on LinkedIn. All they want to do is share the photos with the colleagues and the friends—and they can’t. Which is really stupid. You either have to tell Flickr about your Facebook friends, or move your photos to Facebook and LinkedIn separately, or build and run a third application to build a bridge between the two.”

“I’m frustrated on behalf of everybody using the Web at the moment”

He also criticized the model of trading privacy for free access to things on the Internet, and said it doesn’t have to be so. “The deal the consumer makes is a myth,” he said. “It is a myth that it has to be, it is a myth that everybody is happy with it, it is a myth that it is optimal” for anybody, the consumers or the marketing machine.

“I’m frustrated on behalf of everybody using the Web at the moment,” said Berners-Lee. “But excited that we can really decentralize the web, and that we have the group of people here” who can do it.

Berners-Lee and other speakers at the event pointed out that a key problem of the Web today is its ephemeral nature, only partly compensated for by Kahle’s Wayback Machine, an effort he himself called a kluge. Web pages “blink on and offline” when businesses close or websites move, breaking hyperlinks and making the Web unreliable.

Another major problem is the sketchy privacy controls that leave users unsure of who or what is monitoring their activity and data.

“Code is law. The way we code the Web is how we live our lives online”

Still, we’re all using the Web because, Kahle acknowledged, “It is fun. It’s a jungle out there, but it’s a fun jungle to go play in.”

It can be a lot better, however. “We can go for a trifecta: make it reliable, private, and still fun,” he said, adding, “extra credit if we can make it that people can make money by publishing without going through a third party.”

This all must be “baked into the code,” he insisted.

“Code is law,” said Kahle. “The way we code the Web is how we live our lives online.” We can bake privacy in, he said; we can bake the First Amendment in; and we can bake openness in.

Cory Doctorow, author and special advisor at the Electronic Frontier Foundation, put it a different way: “When you go on a diet,” he said, “throw away all your Oreos.”

“The reason the web ceases to be decentralized,” Doctorow said, “is that there are a lot of short-term gains to centralizing things. The Web is centralized today because people like you make compromises.”

Most of the technology to fix the Web, the speakers agreed, already exists; it just needs to be identified and assembled into a workable, unified system.

Vint Cerf. Photo: Brad Shirakawa/Internet Archive

Vint Cerf, known as the father of the Internet and now at Google, tossed out a rapid-fire list of suggestions about where to look for this technology. Among these, he asked the developers to:

  • Consider the way Google Docs works with its ability to propagate changes to multiple editors in real time. 
  • Think about some sort of publish/subscribe system, in which a web-page creator can regularly hit a publish command that makes it available for archiving, and various web archives can subscribe to receive updates.
  • Think about creating an archive of software as well, one that may have to include emulations of defunct hardware and operating systems to keep the Web backward compatible.
  • Figure out a way to authenticate Web sites, so only true versions go down in history, and consider how powerful that tool would be for a society interested in verifying financial transactions or intellectual property.
  • Figure out how to protect copyright for a time, but then automatically unlock those protections when copyright expires.

Berners-Lee also had some advice:

  • Change the naming system, and stop thinking of the URL as a location—it’s a name, a format he picked to look like a Unix file name simply because people were comfortable with that.
  • Consider a dot-archive domain: you get a name there forever, and anything you put there stays, even after you die.
  • Surface the data behind web pages; maybe each page needs two ways of looking at it: the plain page that you can view with a standard browser, and the data layer that you can explore with more powerful tools.

Brewster Kahle. Photo: Brad Shirakawa/Internet Archive

And Kahle added to the list:  

  • Consider the Amazon cloud: “It is a decentralized system that has some great features,” albeit one under the control of a single owner.
  • Look at JavaScript; it can live in a distributed computing platform and could be the operating system for the new web.
  • Think about blockchain and Bitcoin as key components of the next web.
  • Don’t forget about public key encryption; it was illegal to distribute when Berners-Lee conceived the first Web, but today it has a role to play.
  • Don’t discount WordPress; it has been embraced by large numbers of people, and perhaps the new Web should have a decentralized WordPress-type service.

Cory Doctorow. Photo: Brad Shirakawa/Internet Archive

EFF’s Doctorow said that, in addition to the right technology, the new Web must be based on some key moral principles that, like the U.S. Constitution, will prevent “our wise leaders of tomorrow” from being “pressured into making compromises.” He suggested two:

  • Computers obey their owners. That is, when a computer receives conflicting instructions from its owner and a remote party, the owner always wins.
  • True facts about security vulnerabilities are always legal to talk about. That is, though there can be codes of conduct about responsible disclosure of bugs, the state should never control this.

Was this the day the Web started to change? Will the Decentralized Web Summit be looked back on like the first Hackers Conference or the Mother of All Demos? Only history will tell, but just in case, the organizers gathered the group together for a big photograph. And if the Web becomes decentralized and permanently archived, this picture will never disappear.

Web pioneers old and new kick off a movement to decentralize the Web. Photo: Brad Shirakawa/Internet Archive

What comes next is unclear, but the attendees appeared determined to figure it out. Said Kahle, “Do we need VC funding? There are foundations that could get us over the hump, is that what is needed? Or would money hinder rather than help? Should we have a series of conferences? What about prizes? What do we do now?”

Those sounded like questions, but really they were a call to action for the charged-up attendees to go out and do it.

Concluded Kahle: “Let’s go build the decentralized Web!”


Metamaterials Could Solve One of 6G’s Big Problems

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces

Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas. Illustration: Chris Philpot

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. So as the waves fall on such a surface, it can alter the incident waves’ direction so as to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.

The key feature of an RIS that makes it attractive in comparison with alternatives such as active repeaters is its nearly passive nature. The absence of amplifiers to boost the signal means that an RIS node can be powered with just a battery and a small solar panel.

RIS functions like a very sophisticated mirror, whose orientation and curvature can be adjusted in order to focus and redirect a signal in a specific direction. But rather than physically moving or reshaping the mirror, you electronically alter its surface so that it changes key properties of the incoming electromagnetic wave, such as the phase.

That’s what the metamaterials do. This emerging class of materials exhibits properties beyond (from the Greek meta) those of natural materials, such as anomalous reflection or refraction. The materials are fabricated using ordinary metals and electrical insulators, or dielectrics. As an electromagnetic wave impinges on a metamaterial, a predetermined gradient in the material alters the phase and other characteristics of the wave, making it possible to bend the wave front and redirect the beam as desired.

An RIS node is made up of hundreds or thousands of metamaterial elements called unit cells. Each cell consists of metallic and dielectric layers along with one or more switches or other tunable components. A typical structure includes an upper metallic patch with switches, a biasing layer, and a metallic ground layer separated by dielectric substrates. By controlling the biasing—the voltage between the metallic patch and the ground layer—you can switch each unit cell on or off and thus control how each cell alters the phase and other characteristics of an incident wave.

To control the direction of the larger wave reflecting off the entire RIS, you synchronize all the unit cells to create patterns of constructive and destructive interference in the larger reflected waves [see illustration below]. This interference pattern reforms the incident beam and sends it in a particular direction determined by the pattern. This basic operating principle, by the way, is the same as that of a phased-array radar.

Beamforming by constructive and destructive interference: a reconfigurable intelligent surface comprises an array of unit cells. In each unit cell, a metamaterial alters the phase of an incoming radio wave, so that the resulting waves interfere with one another. Precisely controlling the patterns of this constructive and destructive interference allows the reflected wave to be redirected, improving signal coverage. Illustration: Erik Vrielink
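
To make that concrete, here is a minimal numerical sketch of the phased-array principle (my own illustration, not code from the article), assuming a single row of idealized unit cells whose phases can be set continuously; the frequency, spacing, and angles are arbitrary examples:

# Minimal sketch: steering a reflected beam with one row of idealized RIS cells,
# far-field assumption, continuously tunable phases.
import numpy as np

c = 3e8                       # speed of light, m/s
f = 28e9                      # carrier frequency, Hz (illustrative)
lam = c / f                   # wavelength, ~1.07 cm
d = lam / 2                   # unit-cell spacing
N = 64                        # number of unit cells in the row
n = np.arange(N)

theta_in = np.deg2rad(30)     # direction of the incident wave
theta_out = np.deg2rad(-45)   # desired direction of the reflected beam

# Per-cell phase shift that makes all re-radiated contributions add up
# in phase toward theta_out (and interfere destructively elsewhere).
k = 2 * np.pi / lam
cell_phase = -k * d * n * (np.sin(theta_out) + np.sin(theta_in))

def array_factor(theta):
    """Relative field strength re-radiated in direction theta."""
    phase = k * d * n * (np.sin(theta) + np.sin(theta_in)) + cell_phase
    return np.abs(np.exp(1j * phase).sum()) / N

angles = np.deg2rad(np.linspace(-90, 90, 721))
best = angles[np.argmax([array_factor(t) for t in angles])]
print(f"beam peak near {np.rad2deg(best):.1f} degrees")   # ~ -45

Real surfaces are two-dimensional and their phases are quantized, but the idea is the same: the per-cell phase profile determines the direction in which the re-radiated contributions add constructively.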

An RIS has other useful features. Even without an amplifier, an RIS manages to provide substantial gain—about 30 to 40 decibels relative to isotropic (dBi)—depending on the size of the surface and the frequency. That’s because the gain of an antenna is proportional to the antenna’s aperture area. An RIS has the equivalent of many antenna elements covering a large aperture area, so it has higher gain than a conventional antenna does.
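
A back-of-the-envelope check (mine, not the article's) shows why a surface of that size reaches such figures. For an ideal aperture of area $A$,

$$G \approx \frac{4\pi A}{\lambda^{2}} = \frac{4\pi \times 0.25\ \mathrm{m^{2}}}{(0.0107\ \mathrm{m})^{2}} \approx 2.7\times10^{4} \approx 44\ \mathrm{dBi}$$

for a 0.5-by-0.5-meter surface at 28 GHz ($\lambda \approx 1.07$ cm). Quantized phases and other losses pull practical figures down toward the 30-to-40 dBi range quoted above.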

All the many unit cells in an RIS are controlled by a logic chip, such as a field-programmable gate array with a microcontroller, which also stores the many coding sequences needed to dynamically tune the RIS. The controller gives the appropriate instructions to the individual unit cells, setting their state. The most common coding scheme is simple binary coding, in which the controller toggles the switches of each unit cell on and off. The unit-cell switches are usually semiconductor devices, such as PIN diodes or field-effect transistors.
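
As a hedged illustration of what binary coding amounts to (an assumption on my part, not a description of any particular product), the controller simply rounds each cell's ideal phase to whichever of the two states its switch can realize:

# 1-bit phase quantization: each unit cell's ideal (continuous) phase is
# rounded to the nearest of two realizable states, here 0 or pi radians.
# The ideal phases below are arbitrary example values.
import numpy as np

ideal_phase = np.mod(np.linspace(0, 6 * np.pi, 64), 2 * np.pi)  # per-cell targets

# A cell is switched "on" (pi) when pi is closer to its target than 0 is.
one_bit_state = (np.abs(ideal_phase - np.pi) < np.pi / 2).astype(np.uint8)
quantized_phase = one_bit_state * np.pi

# one_bit_state is the bit pattern the controller (e.g. an FPGA) would write
# to the cells' switches; the cost of 1-bit control is a few decibels of gain
# and higher sidelobes.
print(one_bit_state[:16])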

The important factors here are power consumption, speed, and flexibility, with the control circuit usually being one of the most power-hungry parts of an RIS. Reasonably efficient RIS implementations today consume a total of a few watts to a dozen watts while switching states during reconfiguration, and much less when idle.

Engineers use simulations to decide where to deploy RIS nodes

To deploy RIS nodes in a real-world network, researchers must first answer three questions: How many RIS nodes are needed? Where should they be placed? And how big should the surfaces be? As you might expect, there are complicated calculations and trade-offs.

Engineers can identify the best RIS positions by planning for them when the base station is designed. Or it can be done afterward by identifying, in the coverage map, the areas of poor signal strength. As for the size of the surfaces, that will depend on the frequencies (lower frequencies require larger surfaces) as well as the number of surfaces being deployed.

To optimize the network’s performance, researchers rely on simulations and measurements. At Huawei Sweden, where I work, we’ve had a lot of discussions about the best placement of RIS units in urban environments. We’re using a proprietary platform, called the Coffee Grinder Simulator, to simulate an RIS installation prior to its construction and deployment. We’re partnering with CNRS Research and CentraleSupélec, both in France, among others.

In a recent project, we used simulations to quantify the performance improvement gained when multiple RIS were deployed in a typical urban 5G network. As far as we know, this was the first large-scale, system-level attempt to gauge RIS performance in that setting. We optimized the RIS-augmented wireless coverage through the use of efficient deployment algorithms that we developed. Given the locations of the base stations and the users, the algorithms were designed to help us select the optimal three-dimensional locations and sizes of the RIS nodes from among thousands of possible positions on walls, roofs, corners, and so on. The output of the software is an RIS deployment map that maximizes the number of users able to receive a target signal.
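
The article does not spell out those deployment algorithms, but a simple greedy heuristic conveys the flavor of the selection problem. This sketch is my own assumption, with a hypothetical covers() predicate standing in for a real propagation simulation:

# Greedy RIS placement sketch (hypothetical, not the proprietary algorithm):
# repeatedly pick the candidate position that lifts the most not-yet-covered
# users above the target signal level, until the budget of RIS nodes is spent.
def greedy_ris_placement(candidates, users, covers, budget):
    """
    candidates : list of candidate RIS positions (e.g. wall/roof/corner spots)
    users      : list of user locations
    covers     : covers(position, user) -> bool, True if an RIS at `position`
                 would bring `user` above the target signal level
                 (in practice this comes from a propagation model)
    budget     : number of RIS nodes to deploy
    """
    uncovered = set(range(len(users)))
    chosen = []
    for _ in range(budget):
        best_pos, best_gain = None, 0
        for pos in candidates:
            gain = sum(1 for u in uncovered if covers(pos, users[u]))
            if gain > best_gain:
                best_pos, best_gain = pos, gain
        if best_pos is None:          # no remaining position helps anyone
            break
        chosen.append(best_pos)
        uncovered = {u for u in uncovered if not covers(best_pos, users[u])}
    return chosen

A real planner would also weigh node sizes and costs, as the cost comparison later in the article suggests.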

An experimental reconfigurable intelligent surface with 2,304 unit cells was tested at Tsinghua University, in Beijing, last year. Photo: Tsinghua University

Of course, the users of special interest are those at the edges of the cell-coverage area, who have the worst signal reception. Our results showed big improvements in coverage and data rates at the cell edges—and also for users with decent signal reception, especially in the millimeter band.

We also investigated how potential RIS hardware trade-offs affect performance. Simply put, every RIS design requires compromises—such as digitizing the responses of each unit cell into binary phases and amplitudes—in order to construct a less complex and cheaper RIS. But it’s important to know whether a design compromise will create additional beams to undesired directions or cause interference to other users. That’s why we studied the impact of network interference due to multiple base stations, reradiated waves by the RIS, and other factors.

Not surprisingly, our simulations confirmed that both larger RIS surfaces and larger numbers of them improved overall performance. But which is preferable? When we factored in the costs of the RIS nodes and the base stations, we found that in general a smaller number of larger RIS nodes, deployed farther from a base station and its users to provide coverage to a larger area, was a particularly cost-effective solution.

The size and dimensions of the RIS depend on the operating frequency [see illustration below]. We found that a small number of rectangular RIS nodes, each around 4 meters wide for C-band frequencies (3.5 GHz) and around half a meter wide for the millimeter-wave band (28 GHz), was a good compromise, and could boost performance significantly in both bands. This was a pleasant surprise: RIS improved signals not only in the millimeter-wave (5G high) band, where coverage problems can be especially acute, but also in the C band (5G mid).

Illustration: Marios Poulakis
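
Those two widths track the wavelength almost exactly; a quick sanity check of my own:

$$\lambda_{3.5\ \mathrm{GHz}} = \frac{3\times10^{8}\ \mathrm{m/s}}{3.5\times10^{9}\ \mathrm{Hz}} \approx 8.6\ \mathrm{cm}, \qquad \lambda_{28\ \mathrm{GHz}} \approx 1.1\ \mathrm{cm}.$$

The wavelength ratio is 8, the same as the ratio of the quoted widths (4 m to 0.5 m), which is what you would expect if both surfaces span a comparable number of wavelengths.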


To extend wireless coverage indoors, researchers in Asia are investigating a really intriguing possibility: covering room windows with transparent RIS nodes. Experiments at NTT Docomo in Japan and at Southeast and Nanjing universities, both in China, used smart films or smart glass. The films are fabricated from transparent conductive oxides (such as indium tin oxide), graphene, or silver nanowires and do not noticeably reduce light transmission. When the films are placed on windows, signals coming from outside can be refracted and boosted as they pass into a building, enhancing the coverage inside.

What will it take to make RIS nodes intelligent?

Planning and installing the RIS nodes is only part of the challenge. For an RIS node to work optimally, it needs to have a configuration, moment by moment, that is appropriate for the state of the communication channel in the instant the node is being used. The best configuration requires an accurate and instantaneous estimate of the channel. Technicians can come up with such an estimate by measuring the “channel impulse response” between the base station, the RIS, and the users. This response is measured using pilots, which are reference signals known beforehand by both the transmitter and the receiver. It’s a standard technique in wireless communications. Based on this estimation of the channel, it’s possible to calculate the phase shifts for each unit cell in the RIS.
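
A hedged sketch of that last step (a standard textbook formulation, not necessarily the exact procedure used in any deployed system): once the per-cell cascaded channels have been estimated from the pilots, each cell's phase shift is chosen so that all the reflected contributions arrive at the user in phase.

# Per-cell phase shifts from estimated channels (illustrative random channels).
# h[n]: base station -> unit cell n, g[n]: unit cell n -> user, both complex.
import numpy as np

rng = np.random.default_rng(0)
N = 256                                                     # number of unit cells
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # estimated from pilots
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # estimated from pilots

theta = -np.angle(h * g)                                    # cancel each cascaded path's phase
aligned = np.abs(np.sum(g * np.exp(1j * theta) * h))        # all terms add coherently
unaligned = np.abs(np.sum(g * h))                           # no RIS phase control
print(f"coherent-combining gain: {20 * np.log10(aligned / unaligned):.1f} dB")

The catch, as the next paragraph explains, is that estimating these channels for every one of hundreds or thousands of cells requires a large number of pilots.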

The current approaches perform these calculations at the base station. However, that requires a huge number of pilots, because every unit cell needs its own phase configuration. There are various ideas for reducing this overhead, but so far none of them are really promising.

The total calculated configuration for all of the unit cells is fed to each RIS node through a wireless control link. So each RIS node needs a wireless receiver to periodically collect the instructions. This of course consumes power, and it also means that the RIS nodes are fully dependent on the base station, with unavoidable—and unaffordable—overhead and the need for continuous control. As a result, the whole system requires a flawless and complex orchestration of base stations and multiple RIS nodes via the wireless-control channels.

We need a better way. Recall that the “I” in RIS stands for intelligent. The word suggests real-time, dynamic control of the surface from within the node itself—the ability to learn, understand, and react to changes. We don’t have that now. Today’s RIS nodes cannot perceive, reason, or respond; they only execute remote orders from the base station. That’s why my colleagues and I at Huawei have started working on a project we call Autonomous RIS (AutoRIS). The goal is to enable the RIS nodes to autonomously control and configure the phase shifts of their unit cells. That will largely eliminate the base-station-based control and the massive signaling that either limit the data-rate gains from using RIS, or require synchronization and additional power consumption at the nodes. The success of AutoRIS might very well help determine whether RIS will ever be deployed commercially on a large scale.

Of course, it’s a rather daunting challenge to integrate into an RIS node the necessary receiving and processing capabilities while keeping the node lightweight and low power. In fact, it will require a huge research effort. For RIS to be commercially competitive, it will have to preserve its low-power nature.

With that in mind, we are now exploring the integration of an ultralow-power AI chip in an RIS, as well as the use of extremely efficient machine-learning models to provide the intelligence. These smart models will be able to produce the output RIS configuration based on the received data about the channel, while at the same time classifying users according to their contracted services and their network operator. Integrating AI into the RIS will also enable other functions, such as dynamically predicting upcoming RIS configurations and grouping users by location or other behavioral characteristics that affect the RIS operation.

Intelligent, autonomous RIS won’t be necessary for all situations. For some areas, a static RIS, with occasional reconfiguration—perhaps a couple of times per day or less—will be entirely adequate. In fact, there will undoubtedly be a range of deployments from static to fully intelligent and autonomous. Success will depend on not just efficiency and high performance but also ease of integration into an existing network.

6G promises to unleash staggering amounts of bandwidth—but only if we can surmount a potentially ruinous range problem.

The real test case for RIS will be 6G. The coming generation of wireless is expected to embrace autonomous networks and smart environments with real-time, flexible, software-defined, and adaptive control. Compared with 5G, 6G is expected to provide much higher data rates, greater coverage, lower latency, more intelligence, and sensing services of much higher accuracy. At the same time, a key driver for 6G is sustainability—we’ll need more energy-efficient solutions to achieve the “net zero” emission targets that many network operators are striving for. RIS fits all of those imperatives.

Start with massive MIMO, which stands for multiple-input multiple-output. This foundational 5G technique uses multiple antennas packed into an array at both the transmitting and receiving ends of wireless channels, to send and receive many signals at once and thus dramatically boost network capacity. However, the desire for higher data rates in 6G will demand even more massive MIMO, which will require many more radio-frequency chains to work and will be power-hungry and costly to operate. An energy-efficient and less costly alternative will be to place multiple low-power RIS nodes between massive MIMO base stations and users as we have described in this article.

The millimeter-wave and subterahertz 6G bands promise to unleash staggering amounts of bandwidth, but only if we can surmount a potentially ruinous range problem without resorting to costly solutions, such as ultradense deployments of base stations or active repeaters. My opinion is that only RIS will be able to make these frequency bands commercially viable at a reasonable cost.

The communications industry is already touting sensing—high-accuracy localization services as well as object detection and posture recognition—as an important possible feature for 6G. Sensing would also enhance performance. For example, highly accurate localization of users will help steer wireless beams efficiently. Sensing could also be offered as a new network service to vertical industries such as smart factories and autonomous driving, where detection of people or cars could be used for mapping an environment; the same capability could be used for surveillance in a home-security system. The large aperture of RIS nodes and their resulting high resolution mean that such applications will be not only possible but probably even cost effective.

And the sky is not the limit. RIS could enable the integration of satellites into 6G networks. Typically, a satellite uses a lot of power and has large antennas to compensate for the long-distance propagation losses and for the modest capabilities of mobile devices on Earth. RIS could play a big role in minimizing those limitations and perhaps even allowing direct communication from satellite to 6G users. Such a scheme could lead to more efficient satellite-integrated 6G networks.

As it transitions into new services and vast new frequency regimes, wireless communication will soon enter a period of great promise and sobering challenges. Many technologies will be needed to usher in this next exciting phase. None will be more essential than reconfigurable intelligent surfaces.

The author wishes to acknowledge the help of Ulrik Imberg in the writing of this article.
