A Case Against Net Neutrality

Allowing ISPs to throttle and prioritize network traffic could improve the user experience—but we need better ways to monitor such behavior

Illustration of tangled internet pipelines. Illustration: IEEE Spectrum; Icons: Getty Images

This is a guest post. The views expressed in this article are solely those of the blogger and do not represent positions of IEEE Spectrum or the IEEE.

The U.S. Federal Communications Commission's proposal to roll back the previous administration's Open Internet Order has put network neutrality back in the news. The FCC's new order, titled Restoring Internet Freedom, removes the bright-line rules enforced by the previous order. If the FCC passes its new order tomorrow, Internet service providers (ISPs) will no longer be prohibited from blocking, throttling, and prioritizing traffic.

A widely expressed concern about the FCC's new proposal is that permitting ISPs to create fast lanes—in other words, letting ISPs charge content providers for delivering their traffic to users at a certain speed or quality—will jeopardize long-term innovation. While large content providers such as Google, Facebook, and Netflix can afford to pay ISPs for that service, a new company probably won't be able to. Allowing ISPs to throttle traffic if content providers don't pay up will make it more difficult for startups to compete with large companies.

However, an aspect of network operations that is often ignored in the popular network neutrality debate is this: It is, in fact, desirable that ISPs not always be neutral in handling network traffic.

As an example, consider two users whose Internet traffic goes through the same congested link. If one user is streaming video and another is backing up data to the cloud, a perfectly neutral network would slow down both transfers. Most people would probably agree that, to create the best experience for the most users, it would be best to slightly slow down noninteractive traffic such as data backups, and free up bandwidth for videos and voice-over-IP calls.
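
To make that intuition concrete, here is a minimal, purely illustrative sketch in Python. The link capacity and per-class weights are assumed values, not any real ISP's policy: a strictly neutral link splits capacity evenly across active flows, while a class-aware link divides it in proportion to weights, giving the video stream a larger slice only while both flows are competing.

LINK_CAPACITY_MBPS = 10.0  # assumed capacity of the congested link

def neutral_share(active_flows):
    """Strictly neutral handling: every active flow gets an equal slice."""
    share = LINK_CAPACITY_MBPS / len(active_flows)
    return {flow: share for flow in active_flows}

def class_aware_share(active_flows, weights):
    """Toy weighted sharing: capacity split in proportion to class weights."""
    total = sum(weights[flow] for flow in active_flows)
    return {flow: LINK_CAPACITY_MBPS * weights[flow] / total
            for flow in active_flows}

flows = ["video_stream", "cloud_backup"]
weights = {"video_stream": 3, "cloud_backup": 1}  # made-up class weights

print(neutral_share(flows))               # {'video_stream': 5.0, 'cloud_backup': 5.0}
print(class_aware_share(flows, weights))  # {'video_stream': 7.5, 'cloud_backup': 2.5}

When the backup is the only active flow, the same weighted rule hands it the entire link, which is the outcome most users would want.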

Both the 2015 Open Internet Order and the proposed Restoring Internet Freedom order recognize the need for ISPs to manage their networks. The difference lies in how the two orders hold ISPs accountable for their network management practices.

The Obama-era FCC administration required that ISPs be prepared to present evidence—for example, in the form of performance measurements of their networks—to prove that they are managing their networks in a reasonable way. In contrast, Trump's FCC administration believes that this regulation places an undue burden on ISPs.

Instead, the FCC's new proposal merely requires ISPs to be transparent and publicly disclose how they manage their networks, so that customers can choose to take their business elsewhere if they are put off by their ISP's practices.

The challenge in placing the onus on ISPs to be transparent is that external observers have no good way to verify an ISP's claims. For example, it would be reasonable to expect that an ISP would resort to throttling some types of traffic only when its network is congested. When there is plentiful bandwidth to spare, there's no reason to throttle legitimate traffic.
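
That expectation can be pictured as a simple conditional policy. The sketch below uses made-up threshold and rate values; it caps bulk flows only while measured utilization indicates congestion, and leaves them untouched otherwise.

CONGESTION_THRESHOLD = 0.9  # assumed utilization fraction that counts as "congested"
BULK_RATE_CAP_MBPS = 2.0    # assumed cap applied to bulk flows during congestion

def bulk_rate_limit(link_utilization, requested_mbps):
    """Allowed rate for a bulk flow under this toy policy."""
    if link_utilization >= CONGESTION_THRESHOLD:
        return min(requested_mbps, BULK_RATE_CAP_MBPS)
    return requested_mbps  # no congestion, so no throttling

print(bulk_rate_limit(0.95, 8.0))  # 2.0: throttled while the link is congested
print(bulk_rate_limit(0.40, 8.0))  # 8.0: untouched when bandwidth is plentiful

The policy itself is trivial; the hard part is that an outside observer cannot easily check whether utilization really crossed the threshold at the moment the throttling occurred.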

However, if an ISP violates this expectation, detecting such a violation would be hard. The research community has developed increasingly sophisticated measurement techniques over the years to make inferences about the Internet, but it's still difficult to accurately identify when a network link is congested.
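
To illustrate one such inference technique, the sketch below uses a heuristic that external measurement studies often lean on (an assumption here, not a method endorsed by the FCC or any ISP): compare round-trip times under load against an idle baseline, since sustained queueing delay is one signal that a buffer along the path is full. The arbitrary cutoff shows how easily such a test can mislabel a link.

import statistics

def likely_congested(baseline_rtts_ms, loaded_rtts_ms, extra_delay_ms=30.0):
    """Flag a path as congested if the median RTT under load exceeds the
    idle median by more than extra_delay_ms (an arbitrary cutoff)."""
    idle = statistics.median(baseline_rtts_ms)
    loaded = statistics.median(loaded_rtts_ms)
    return (loaded - idle) > extra_delay_ms

print(likely_congested([20, 22, 21], [95, 110, 102]))  # True: heavy queueing delay
print(likely_congested([20, 22, 21], [24, 26, 23]))    # False: load barely moves the RTT

Even this simple check cannot say where along the path the extra delay was added, which is exactly the kind of inference that remains difficult in practice.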

Thus, to resolve the debate on network neutrality, we must answer a key question: How can we legally define the permissible ways an ISP may throttle or prioritize traffic, such that the definition places no undue burden on ISPs yet is verifiable by third parties?

Only then will we be assured of having an Internet that is neutral when it can afford to be and is nonneutral only to the extent it's necessary to improve a user's experience.
