The Net Neutrality Debate Hinges on Defining the Internet

In the U.S., it boils down to whether the Internet is a telecommunications service or an information service

A graphic representation of the complexity of how the Internet works. Illustration: Akindo

The day before Thanksgiving in the U.S., the long-simmering net neutrality debate—over whether Internet service providers should treat all the data they carry equally—erupted again. It was sparked by the U.S. Federal Communications Commission’s release of a draft of the agency’s Restoring Internet Freedom [PDF] proposal, which would repeal much of a 2015 ruling that advanced net neutrality by classifying the Internet as a telecommunications service.

Before 2015, the Internet had been treated as an information service in the U.S. While both telecommunications services and information services are inextricably tied to the movement of information, the subtle differences between them are important enough that the two are subject to vastly different regulations. And many of those rules stem from much older legislation.

The Communications Act of 1934 [PDF] consolidated earlier telephone and radio laws into the country’s first comprehensive piece of communications legislation. At the same time, it handed control of telecommunications from the Interstate Commerce Commission over to the newly minted FCC.

The Communications Act, as amended by the Telecommunications Act of 1996, also lays out definitions of the two terms that have fueled an extensive back-and-forth debate ever since the Internet’s inception, a debate that reached a crescendo with FCC Chairman Ajit Pai’s current proposal.

First, it helps to understand the Communications Act’s definition of telecommunications. The definition—which hasn’t changed since it was written—is the “transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.”

There are two key takeaways there. First, the information travels to a destination of the user’s choosing. This could be something like making a phone call over a landline to your neighbor down the road. You know where the information—the content of the conversation—is traveling. With mobile phones, that same concept becomes a bit more abstract: you may not know the physical endpoint of the call, because you don’t know where the other person is, but that’s okay, because the phone number itself serves as the specified endpoint.

Second, the transmitted information is unaltered. This seems straightforward—after all, you wouldn’t expect only every other word of your phone conversation to make it through—but this point gets to the heart of the debate about how to classify the Internet.

From that definition, it’s clear that a telecommunications service provides unaltered transmission of a user’s information.

At this point, you might say that definition sounds quite a bit like the Internet—you request information from a website, and it sends the information to you. And you’d be right, which is why in 2015, the FCC voted to regulate the Internet in this way.

When the FCC voted in 2015 to regulate Internet service providers as common carriers—transmitters of all information equally, regardless of source—it classified the Internet itself as a telecommunications service. In essence, ISPs would not have the authority to alter the transmission or content of the information passing through their networks, or to prioritize any of that information over the rest.

But the Internet is a complicated beast. Though at first glance it seems to operate on the simple transmission of data, in practice it functions a lot like an information service.

According to the statute, information services, unlike telecommunications services, do allow the information being transmitted to be manipulated along the way.

Today’s Internet would be virtually impossible to use without the contributions of services like the Domain Name System. The DNS eases our ability to navigate to webpages by translating their human-readable names into the unique strings of numbers—IP addresses—that computers actually use to reach one another. You could argue, as the current FCC proposal does, that this is a way of transforming or processing the information being transmitted.
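
To make that translation step concrete, here is a minimal sketch of a DNS lookup using Python’s standard-library resolver. The domain example.com is just a placeholder, and a real browser’s resolution path is more elaborate:

```python
import socket

def resolve(hostname):
    """Translate a human-readable domain name into numeric IP addresses."""
    # Ask the system resolver (which in turn queries the DNS) for the
    # addresses behind this name, restricted to TCP endpoints.
    results = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); the first
    # element of sockaddr is the IP address as a string.
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    # A browser performs the equivalent of this lookup before it can even
    # open a connection to the server you asked for.
    print(resolve("example.com"))
```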

The Internet also relies extensively on caching, in which commonly requested information is stored on the requesting device or on other network components for quick retrieval. Because of this, it’s not always clear where all the data fulfilling a request is coming from.
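
The effect is easy to sketch in code. Below is a toy cache that assumes a fixed time-to-live for each response, rather than the HTTP cache-control rules that real browser caches, CDNs, and ISP proxies follow; the point is simply that a repeat request may never reach the origin server at all:

```python
import time
import urllib.request

TTL_SECONDS = 60.0  # Assumed lifetime; real caches honor HTTP headers.
_cache = {}         # Maps url -> (time fetched, response body).

def fetch(url):
    """Return the body at url, serving repeat requests from the cache."""
    now = time.monotonic()
    cached = _cache.get(url)
    if cached is not None and now - cached[0] < TTL_SECONDS:
        # Cache hit: the origin server is never contacted, so the
        # requester cannot tell where the bytes actually came from.
        return cached[1]
    with urllib.request.urlopen(url) as response:
        body = response.read()
    _cache[url] = (now, body)
    return body
```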

Features like the DNS and caching are why the Internet looks a lot like an information service. Data is manipulated, processed, and generated throughout the network to meet users’ requests, which is far more complicated than the simple transmission of a phone call.

In a 2005 case, National Cable & Telecommunications Association v. Brand X Internet Services, the Supreme Court recognized the FCC’s authority to classify and regulate the Internet in whichever way makes the most sense to the agency, so long as it can provide a reasonable argument for its choice. So FCC commissioners may argue that capabilities like the DNS and caching are fundamental to what the Internet offers (as the FCC is currently saying), or that they are merely incidental to transmission (as the FCC said in 2015).

So until Congress more firmly defines what the Internet is—in its most recent communications legislation, passed in 1996, it referred to the Internet as an information service while ultimately deferring to the FCC’s authority to make decisions down the road—the argument over how to regulate the Internet will likely keep bouncing back and forth.
