The U.S. Federal Trade Commission is wrong to claim broad authority to seek sanctions against companies for data breaches when it has no clearly defined data security standards, said panelists at a forum sponsored by Tech Freedom, a Washington, D.C., think tank that regularly rails against government regulation.

The event, held on Thursday, focused on the fact that over the past decade the FTC has settled nearly four dozen cases after filing complaints based on its reasoning that a failure to maintain sufficient data security constitutes an unfair or deceptive trade practice. Two pending court cases, says a Tech Freedom statement, "may finally allow the courts to rule on the legal validity of what the FTC calls its 'common law of settlements.'"

One of the FTC critics speaking at the forum was Mike Daugherty, CEO of Atlanta-based diagnostic lab LabMD. The company is currently in the agency’s crosshairs but is fighting back. According to the FTC, a spreadsheet in LabMD's possession containing Social Security numbers, dates of birth, health insurance provider information, standardized medical treatment codes, and other information for more than 9,000 patients somehow ended up on a peer-to-peer file-sharing network in 2008. That incident, along with another LabMD security lapse last year in which 500 customer records fell into the hands of identity thieves, prompted the agency to file a complaint.

Those facts notwithstanding, the company maintains that the complaint wasn't based on established rules. According to a Computerworld article, Daugherty said the agency's use of Section 5 of the FTC Act, which allows it to take action to prevent or punish unfair or deceptive business practices, is a huge overreach. “If you want to upset [FTC officials], ask them what the standards are,” Daugherty said, adding incredulously, “You mean you can make them up as you go along?”

The forum participants agreed that the U.S. Congress needs to step in and pass legislation that gives the FTC or some other federal agency a specific mandate for such action and rules to follow. What becomes of that argument may be determined by the outcome of those pending court cases.

Bruce Schneier on Combating the NSA’s War on Data Security

Bruce Schneier, internationally renowned security technologist and author of the influential newsletter "Crypto-Gram" and the blog "Schneier on Security," sits down for a conversation about revelations of the NSA’s efforts to subvert and weaken cryptographic algorithms, security products, and standards. In the podcast, Schneier, whose books include Liars and Outliers: Enabling the Trust That Society Needs to Thrive, talks about what it will take to defeat the capabilities the NSA has developed. The NSA isn’t achieving this through sleuthing or ultra-advanced mathematical techniques. It’s mainly striking agreements with software vendors, who deliberately weaken security protocols such as SSL and VPNs in ways known only to the NSA.

Why would a company acquiesce to the government in this way? Schneier says that the NSA can ask nicely (while holding a club in its hand in the form of threats to withhold government contracts). It can also force a firm to play ball by sending it a National Security Letter demanding cooperation as well as the company’s silence about what it is being told to do to its unsuspecting customers. And the agency is not above placing a covert agent inside a company to surreptitiously weaken products. “It validates all the paranoia,” Schneier said. “We now can’t trust anything. It’s possible that they’ve done this to only half the protocols on the Internet. But which half? How do you know? You don’t. If a company says, ‘It’s not us,’ you can’t trust them. The CEO might not know [if its cryptography has been weakened by the NSA].”

Pwn2Own Part II: The Researchers Hack Back

HP TippingPoint, whose Zero Day Initiative (ZDI) bug bounty program pays researchers to spot vulnerabilities so it can do an even better job of protecting customers against as-yet-unpatched security holes, is yet again putting its money where its mouth is. It announced this week on its company blog that it and its co-sponsors, Google and BlackBerry, are putting up US $300,000 in prize money for a hacking contest challenging researchers to demonstrate successful attacks against mobile services and browsers. The Mobile Pwn2Own contest will take place in Tokyo on 13 and 14 November. The first researcher or team to hack a phone's baseband processor will walk away with $100,000. The contest’s rules require that researchers disclose details of the vulnerabilities they leveraged as well as the exploit techniques used to hack the device, service, or operating system.

Big money will still be available for hacking a mobile browser ($40,000, but $50,000 for Chrome on Android running on a Nexus 4 or Samsung Galaxy S4); a mobile operating system ($40,000); a message service such as SMS ($70,000); or a short-distance linking technology, like Bluetooth or NFC ($50,000).

The researchers can choose the wall they’ll attempt to scale; the list of eligible devices to be picked apart includes Apple's iPhone 5 and iPad Mini, Google's Nexus 4 smartphone and Nexus 7 tablet, Nokia's Lumia 1020, and Samsung's Galaxy S4 smartphone.

Contractor Steals Data on 2 Million Vodafone Customers

German police and security experts have informed Vodafone customers that a contractor accessed a database inside the telecom giant’s network and made off with customer names, addresses, birth dates, and bank account numbers, among other personal data, for as many as two million customers. Ouch. Though the authorities have a suspect in custody, that provides no assurance about who else may have gained access to the data and what plans they have for it.

A Kaspersky Threatpost article notes that “Vodafone delayed disclosing the breach in order to give authorities time to investigate.” Meanwhile, Vodafone released a statement describing the activities it has been engaged in subsequent to the horse leaving the barn. Administrators’ passwords have been changed, digital certificates updated, and the server from which the data was pilfered wiped, the company said.

And In Other Cybercrime News…

Is Cybercrime in Russia Actually Declining?

E-Mail Spam Campaign Spreads Android Malware to Smartphones

Twelve Arrested in Plot to Rob London Bank Remotely Using KVM Device Installed on a Computer at a Local Branch

Image: Danil Melekhin/iStockphoto


Metamaterials Could Solve One of 6G’s Big Problems

There’s plenty of bandwidth available if we use reconfigurable intelligent surfaces


Ground level in a typical urban canyon, shielded by tall buildings, will be inaccessible to some 6G frequencies. Deft placement of reconfigurable intelligent surfaces [yellow] will enable the signals to pervade these areas.

Chris Philpot

For all the tumultuous revolution in wireless technology over the past several decades, there have been a couple of constants. One is the overcrowding of radio bands, and the other is the move to escape that congestion by exploiting higher and higher frequencies. And today, as engineers roll out 5G and plan for 6G wireless, they find themselves at a crossroads: After years of designing superefficient transmitters and receivers, and of compensating for the signal losses at the end points of a radio channel, they’re beginning to realize that they are approaching the practical limits of transmitter and receiver efficiency. From now on, to get high performance as we go to higher frequencies, we will need to engineer the wireless channel itself. But how can we possibly engineer and control a wireless environment, which is determined by a host of factors, many of them random and therefore unpredictable?

Perhaps the most promising solution, right now, is to use reconfigurable intelligent surfaces. These are planar structures typically ranging in size from about 100 square centimeters to about 5 square meters or more, depending on the frequency and other factors. These surfaces use advanced substances called metamaterials to reflect and refract electromagnetic waves. Thin two-dimensional metamaterials, known as metasurfaces, can be designed to sense the local electromagnetic environment and tune the wave’s key properties, such as its amplitude, phase, and polarization, as the wave is reflected or refracted by the surface. When waves fall on such a surface, it can thus alter their direction so as to strengthen the channel. In fact, these metasurfaces can be programmed to make these changes dynamically, reconfiguring the signal in real time in response to changes in the wireless channel. Think of reconfigurable intelligent surfaces as the next evolution of the repeater concept.

The key feature of an RIS that makes it attractive in comparison with alternatives such as repeaters is its nearly passive nature. The absence of amplifiers to boost the signal means that an RIS node can be powered with just a battery and a small solar panel.

An RIS functions like a very sophisticated mirror, whose orientation and curvature can be adjusted in order to focus and redirect a signal in a specific direction. But rather than physically moving or reshaping the mirror, you electronically alter its surface so that it changes key properties of the incoming electromagnetic wave, such as the phase.

That’s what the metamaterials do. This emerging class of materials exhibits properties beyond (from the Greek meta) those of natural materials, such as anomalous reflection or refraction. The materials are fabricated using ordinary metals and electrical insulators, or dielectrics. As an electromagnetic wave impinges on a metamaterial, a predetermined gradient in the material alters the phase and other characteristics of the wave, making it possible to bend the wave front and redirect the beam as desired.
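The redirection follows what's known as the generalized law of reflection: a phase gradient across the surface shifts the reflected beam away from the ordinary mirror angle. Here is a minimal Python sketch of that relationship; the carrier frequency and the gradient value are illustrative assumptions, not figures from this article.

```python
import numpy as np

# Generalized Snell's law of reflection for a phase-gradient metasurface:
#   sin(theta_r) - sin(theta_i) = (lambda / (2 * pi)) * dphi/dx
# With zero gradient the surface acts as an ordinary mirror; a nonzero
# gradient bends the reflected beam to a new, "anomalous" angle.

c = 3e8                      # speed of light, m/s
f = 28e9                     # 28-GHz millimeter-wave carrier (assumed)
lam = c / f

theta_i = np.deg2rad(30)     # angle of incidence
dphi_dx = 200.0              # phase gradient along the surface, rad/m (assumed)

sin_theta_r = np.sin(theta_i) + lam / (2 * np.pi) * dphi_dx
theta_r = np.rad2deg(np.arcsin(sin_theta_r))
print(f"ordinary mirror reflects at 30.0 deg; metasurface at {theta_r:.1f} deg")
```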

An RIS node is made up of hundreds or thousands of metamaterial elements called unit cells. Each cell consists of metallic and dielectric layers along with one or more switches or other tunable components. A typical structure includes an upper metallic patch with switches, a biasing layer, and a metallic ground layer separated by dielectric substrates. By controlling the biasing—the voltage between the metallic patch and the ground layer—you can switch each unit cell on or off and thus control how each cell alters the phase and other characteristics of an incident wave.

To control the direction of the larger wave reflecting off the entire RIS, you synchronize all the unit cells to create patterns of constructive and destructive interference in the larger reflected waves [see illustration below]. This interference pattern reforms the incident beam and sends it in a particular direction determined by the pattern. This basic operating principle, by the way, is the same as that of a phased-array radar.

Beamforming by constructive and destructive interference

Erik Vrielink

A reconfigurable intelligent surface comprises an array of unit cells. In each unit cell, a metamaterial alters the phase of an incoming radio wave, so that the resulting waves interfere with one another [above, top]. Precisely controlling the patterns of this constructive and destructive interference allows the reflected wave to be redirected [bottom], improving signal coverage.
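To see the phased-array principle in miniature, consider a single row of unit cells with ideal, continuous phase control. In the sketch below, the cell count, spacing, and target angle are chosen purely for illustration: each cell gets the phase that makes all contributions arrive in step toward the target direction, and scanning the resulting interference pattern confirms where the beam lands.

```python
import numpy as np

# Beam steering with an N-cell linear RIS: choose per-cell phases so the
# cells' reflections interfere constructively toward the target angle
# and destructively elsewhere (the phased-array principle).

c = 3e8
f = 3.5e9                          # C-band carrier (assumed)
lam = c / f
N = 64                             # unit cells in one row (assumed)
d = lam / 2                        # half-wavelength cell spacing
x = np.arange(N) * d               # cell positions along the surface

theta_t = np.deg2rad(25)           # desired reflection direction (assumed)
cell_phase = -2 * np.pi / lam * x * np.sin(theta_t)

# Array factor: total reflected field magnitude versus observation angle
angles = np.deg2rad(np.linspace(-90, 90, 1801))
paths = cell_phase[:, None] + 2 * np.pi / lam * x[:, None] * np.sin(angles)
af = np.abs(np.exp(1j * paths).sum(axis=0))
print(f"beam peaks at {np.rad2deg(angles[np.argmax(af)]):.1f} deg (target: 25.0)")
```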

An RIS has other useful features. Even without an amplifier, an RIS manages to provide substantial gain—about 30 to 40 decibels relative to isotropic (dBi)—depending on the size of the surface and the frequency. That’s because the gain of an antenna is proportional to the antenna’s aperture area. An RIS has the equivalent of many antenna elements covering a large aperture area, so it has higher gain than a conventional antenna does.
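That claim is easy to sanity-check with the standard aperture-gain formula, G = 4πA/λ². The sketch below uses illustrative surface sizes and assumes a 50 percent aperture efficiency, since a real RIS falls short of an ideal aperture; both results land in or near the 30-to-40 dBi range.

```python
import numpy as np

# Ideal aperture gain is G = 4 * pi * A / lambda^2; an efficiency factor
# (assumed 50%) accounts for a real RIS falling short of the ideal.

c = 3e8  # speed of light, m/s

def aperture_gain_dbi(area_m2, freq_hz, efficiency=0.5):
    lam = c / freq_hz
    return 10 * np.log10(efficiency * 4 * np.pi * area_m2 / lam**2)

print(f"2 m x 2 m surface at 3.5 GHz:    {aperture_gain_dbi(4.0, 3.5e9):.1f} dBi")
print(f"0.5 m x 0.5 m surface at 28 GHz: {aperture_gain_dbi(0.25, 28e9):.1f} dBi")
```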

All the many unit cells in an RIS are controlled by a logic chip, such as a field-programmable gate array with a microcontroller, which also stores the many coding sequences needed to dynamically tune the RIS. The controller gives the appropriate instructions to the individual unit cells, setting their state. The most common coding scheme is simple binary coding, in which the controller toggles the switches of each unit cell on and off. The unit-cell switches are usually semiconductor devices, such as PIN diodes or field-effect transistors.
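As a rough illustration of what binary coding means in practice, the controller reduces each cell's desired continuous phase to the nearer of two hardware states, say 0 and π radians. The mapping below is a generic sketch, not any particular device's coding table.

```python
import numpy as np

# 1-bit ("binary") phase coding: each unit cell's switch offers only two
# states, here 0 rad (off) and pi rad (on). The desired continuous phase
# profile is rounded cell by cell to whichever state is closer.

def binary_code(desired_phase_rad):
    wrapped = np.mod(desired_phase_rad, 2 * np.pi)
    # pi is the closer state when the wrapped phase lies within pi/2 of it
    return (np.abs(wrapped - np.pi) < np.pi / 2).astype(int)

desired = np.linspace(0, 4 * np.pi, 8)   # e.g., a linear steering profile
print(binary_code(desired))              # one bit per cell, set by the FPGA
```

The price of this simplicity is quantization error, one source of the unwanted extra beams discussed later in this article.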

The important factors here are power consumption, speed, and flexibility, with the control circuit usually being one of the most power-hungry parts of an RIS. Reasonably efficient RIS implementations today consume a total of a few watts to a dozen watts while actively switching during reconfiguration, and much less in the idle state.

Engineers use simulations to decide where to deploy RIS nodes

To deploy RIS nodes in a real-world network, researchers must first answer three questions: How many RIS nodes are needed? Where should they be placed? And how big should the surfaces be? As you might expect, there are complicated calculations and trade-offs.

Engineers can identify the best RIS positions by planning for them when the base station is designed. Or it can be done afterward by identifying, in the coverage map, the areas of poor signal strength. As for the size of the surfaces, that will depend on the frequencies (lower frequencies require larger surfaces) as well as the number of surfaces being deployed.

To optimize the network’s performance, researchers rely on simulations and measurements. At Huawei Sweden, where I work, we’ve had a lot of discussions about the best placement of RIS units in urban environments. We’re using a proprietary platform, called the Coffee Grinder Simulator, to simulate an RIS installation prior to its construction and deployment. We’re partnering with CNRS and CentraleSupélec, both in France, among others.

In a recent project, we used simulations to quantify the performance improvement gained when multiple RIS nodes were deployed in a typical urban 5G network. As far as we know, this was the first large-scale, system-level attempt to gauge RIS performance in that setting. We optimized the RIS-augmented wireless coverage through the use of efficient deployment algorithms that we developed. Given the locations of the base stations and the users, the algorithms were designed to help us select the optimal three-dimensional locations and sizes of the RIS nodes from among thousands of possible positions on walls, roofs, corners, and so on. The output of the software is an RIS deployment map that maximizes the number of users able to receive a target signal.
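The deployment algorithms themselves are proprietary, but the flavor of the problem can be conveyed with a toy greedy search: from many candidate positions, repeatedly pick the site that newly covers the most users. Everything in the sketch below, including the coverage matrix and the node budget, is invented for illustration; in a real tool the coverage data would come from ray-tracing, not random numbers.

```python
import numpy as np

# Toy RIS-placement search: greedily choose the candidate sites that
# cover the most still-uncovered users, until the budget runs out.

rng = np.random.default_rng(0)
n_sites, n_users = 1000, 200
# covers[i, j] is True if an RIS at site i gives user j the target signal
covers = rng.random((n_sites, n_users)) < 0.02

budget = 5                          # RIS nodes we can afford (assumed)
uncovered = np.ones(n_users, dtype=bool)
chosen = []
for _ in range(budget):
    gains = (covers & uncovered).sum(axis=1)   # new users each site adds
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break                                  # no site helps anymore
    chosen.append(best)
    uncovered &= ~covers[best]

print(f"sites {chosen} cover {(~uncovered).sum()} of {n_users} users")
```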


An experimental reconfigurable intelligent surface with 2,304 unit cells was tested at Tsinghua University, in Beijing, last year.

Tsinghua University

Of course, the users of special interest are those at the edges of the cell-coverage area, who have the worst signal reception. Our results showed big improvements in coverage and data rates at the cell edges—and also for users with decent signal reception, especially in the millimeter band.

We also investigated how potential RIS hardware trade-offs affect performance. Simply put, every RIS design requires compromises—such as digitizing the responses of each unit cell into binary phases and amplitudes—in order to construct a less complex and cheaper RIS. But it’s important to know whether a design compromise will create additional beams in undesired directions or cause interference to other users. That’s why we studied the impact of network interference due to multiple base stations, waves reradiated by the RIS, and other factors.

Not surprisingly, our simulations confirmed that both larger RIS surfaces and larger numbers of them improved overall performance. But which is preferable? When we factored in the costs of the RIS nodes and the base stations, we found that in general a smaller number of larger RIS nodes, deployed further from a base station and its users to provide coverage to a larger area, was a particularly cost-effective solution.

The dimensions of the RIS depend on the operating frequency [see illustration below]. We found that a small number of rectangular RIS nodes, each around 4 meters wide for C-band frequencies (3.5 GHz) and around half a meter wide for the millimeter-wave band (28 GHz), was a good compromise and could boost performance significantly in both bands. This was a pleasant surprise: RIS improved signals not only in the millimeter-wave (5G high) band, where coverage problems can be especially acute, but also in the C band (5G mid).

Marios Poulakis


To extend wireless coverage indoors, researchers in Asia are investigating a really intriguing possibility: covering room windows with transparent RIS nodes. Experiments at NTT Docomo in Japan and at Southeast and Nanjing universities in China used smart films or smart glass. The films are fabricated from transparent conductive oxides (such as indium tin oxide), graphene, or silver nanowires and do not noticeably reduce light transmission. When the films are placed on windows, signals coming from outside can be refracted and boosted as they pass into a building, enhancing the coverage inside.

What will it take to make RIS nodes intelligent?

Planning and installing the RIS nodes is only part of the challenge. For an RIS node to work optimally, it needs to have a configuration, moment by moment, that is appropriate for the state of the communication channel in the instant the node is being used. The best configuration requires an accurate and instantaneous estimate of the channel. Technicians can come up with such an estimate by measuring the “channel impulse response” between the base station, the RIS, and the users. This response is measured using pilots, which are reference signals known beforehand by both the transmitter and the receiver. It’s a standard technique in wireless communications. Based on this estimation of the channel, it’s possible to calculate the phase shifts for each unit cell in the RIS.
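To make that last step concrete, here is a sketch under a standard textbook model (not our production software): if h holds the estimated channels from the base station to each unit cell and g the channels from each cell to the user, the cascaded path through cell n picks up the phase of h[n]·g[n], and setting the cell's shift to the negative of that phase makes all the paths add coherently at the user. The channel values below are random stand-ins, not measurements.

```python
import numpy as np

# Per-cell phase computation after channel estimation (textbook model).
# h[n]: base station -> unit cell n;   g[n]: unit cell n -> user.
# phase[n] = -angle(h[n] * g[n]) co-phases every cascaded path.

rng = np.random.default_rng(1)
N = 256                              # number of unit cells (assumed)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

phase = -np.angle(h * g)             # optimal per-cell phase shifts
aligned = np.abs(np.sum(h * np.exp(1j * phase) * g))
random_cfg = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g))
print(f"gain over a random configuration: {20 * np.log10(aligned / random_cfg):.1f} dB")
```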

The current approaches perform these calculations at the base station. However, that requires a huge number of pilots, because every unit cell needs its own phase configuration. There are various ideas for reducing this overhead, but so far none of them are really promising.

The total calculated configuration for all of the unit cells is fed to each RIS node through a wireless control link. So each RIS node needs a wireless receiver to periodically collect the instructions. This of course consumes power, and it also means that the RIS nodes are fully dependent on the base station, with unavoidable—and unaffordable—overhead and the need for continuous control. As a result, the whole system requires a flawless and complex orchestration of base stations and multiple RIS nodes via the wireless-control channels.

We need a better way. Recall that the “I” in RIS stands for intelligent. The word suggests real-time, dynamic control of the surface from within the node itself—the ability to learn, understand, and react to changes. We don’t have that now. Today’s RIS nodes cannot perceive, reason, or respond; they only execute remote orders from the base station. That’s why my colleagues and I at Huawei have started working on a project we call Autonomous RIS (AutoRIS). The goal is to enable the RIS nodes to autonomously control and configure the phase shifts of their unit cells. That will largely eliminate the base-station-based control and the massive signaling that either limit the data-rate gains from using RIS or require synchronization and additional power consumption at the nodes. The success of AutoRIS might very well help determine whether RIS will ever be deployed commercially on a large scale.

Of course, it’s a rather daunting challenge to integrate into an RIS node the necessary receiving and processing capabilities while keeping the node lightweight and low power. In fact, it will require a huge research effort. For RIS to be commercially competitive, it will have to preserve its low-power nature.

With that in mind, we are now exploring the integration of an ultralow-power AI chip in an RIS, as well as the use of extremely efficient machine-learning models to provide the intelligence. These smart models will be able to produce the output RIS configuration based on the received data about the channel, while at the same time classifying users according to their contracted services and their network operator. Integrating AI into the RIS will also enable other functions, such as dynamically predicting upcoming RIS configurations and grouping users by location or other behavioral characteristics that affect the RIS operation.

Intelligent, autonomous RIS won’t be necessary for all situations. For some areas, a static RIS, with occasional reconfiguration—perhaps a couple of times per day or less—will be entirely adequate. In fact, there will undoubtedly be a range of deployments from static to fully intelligent and autonomous. Success will depend on not just efficiency and high performance but also ease of integration into an existing network.


The real test case for RIS will be 6G. The coming generation of wireless is expected to embrace autonomous networks and smart environments with real-time, flexible, software-defined, and adaptive control. Compared with 5G, 6G is expected to provide much higher data rates, greater coverage, lower latency, more intelligence, and sensing services of much higher accuracy. At the same time, a key driver for 6G is sustainability—we’ll need more energy-efficient solutions to achieve the “net zero” emission targets that many network operators are striving for. RIS fits all of those imperatives.

Start with massive MIMO, which stands for multiple-input multiple-output. This foundational 5G technique uses multiple antennas packed into an array at both the transmitting and receiving ends of wireless channels, to send and receive many signals at once and thus dramatically boost network capacity. However, the desire for higher data rates in 6G will demand even more massive MIMO, which will require many more radio-frequency chains to work and will be power-hungry and costly to operate. An energy-efficient and less costly alternative will be to place multiple low-power RIS nodes between massive MIMO base stations and users as we have described in this article.

The millimeter-wave and subterahertz 6G bands promise to unleash staggering amounts of bandwidth, but only if we can surmount a potentially ruinous range problem without resorting to costly solutions, such as ultradense deployments of base stations or active repeaters. My opinion is that only RIS will be able to make these frequency bands commercially viable at a reasonable cost.

The communications industry is already touting sensing—high-accuracy localization services as well as object detection and posture recognition—as an important possible feature for 6G. Sensing would also enhance performance. For example, highly accurate localization of users will help steer wireless beams efficiently. Sensing could also be offered as a new network service to vertical industries such as smart factories and autonomous driving, where detection of people or cars could be used for mapping an environment; the same capability could be used for surveillance in a home-security system. The large aperture of RIS nodes and their resulting high resolution mean that such applications will be not only possible but probably even cost effective.

And the sky is not the limit. RIS could enable the integration of satellites into 6G networks. Typically, a satellite uses a lot of power and has large antennas to compensate for the long-distance propagation losses and for the modest capabilities of mobile devices on Earth. RIS could play a big role in minimizing those limitations and perhaps even allowing direct communication from satellite to 6G users. Such a scheme could lead to more efficient satellite-integrated 6G networks.

As it transitions into new services and vast new frequency regimes, wireless communications will soon enter a period of great promise and sobering challenges. Many technologies will be needed to usher in this next exciting phase. None will be more essential than reconfigurable intelligent surfaces.

The author wishes to acknowledge the help of Ulrik Imberg in the writing of this article.
