
The End of the Middle

Pushing network intelligence out to its edges is causing the phone industry to fail—and that's okay

This is part of IEEE Spectrum's special report: What's Wrong—What's Next: 2003 Technology Forecast & Review.

Technological changes affecting telephony are so profound that the end of the large incumbent telephone companies is at hand. They will fail or become unrecognizable, as telephone services are embedded in a more general-purpose network—the Internet.

The U.S. Telecommunications Act of 1996 was designed to cope with these changes in technology, but it has been an utter failure, in large part because it did not apprehend a pivotal architectural change: the End-to-End Principle.

Under this principle, the network does nothing more than deliver bits from point A to point B. The result is a general network that is not constrained by any one application. In contrast, the first communications technologies were designed for special purposes: telephony, television, and data communications, each with its own specialized network. For a new service to become available—touch-tone dialing, for example, or caller-ID—the specialized switching equipment in the telephone central offices and substations had to be reprogrammed or even replaced.

Zeroing in on the endpoints

The original design of the Internet was not end-to-end, either. It followed older design principles born in an age of expensive computing logic. But as the power of microprocessors doubled again and again and again, the ability of the devices at the endpoints to run sophisticated communications protocols, such as error correction, file and mail transport, and encryption, grew.

Without this radical reduction in the cost of computing, the machinery to run communications protocols would be so expensive that it would have to remain at the center of the network, controlled by a centralized authority. The alternative, an end-to-end network with the equivalent of mainframe computers at the endpoints, would have been prohibitively expensive. Instead, thanks to relentless iteration of Moore's Law, network endpoints today are small, powerful, inexpensive devices, such as PCs, home routers, and media hubs. With such power in the endpoints, the need to embed the functions of a network in the network's core shrank.

The consequences of Moore's Law on network architecture weren't immediately obvious. But in the early 1980s, three leading Internet network architects, Jerome Saltzer, David Reed, and David Clark, proposed that if functions like error-checking could be implemented at the endpoints of the network, the Internet would be more flexible, and a wider variety of applications—even unknown future applications—could be more easily implemented.

The logical conclusion of this argument was the separation into different protocols of functions that had hitherto been bundled together. The first important instance of this new, layered architecture was the separation of the error-checking protocol (the transmission control protocol, or simply TCP) from the Internet protocol (IP). Today, at the IP layer, the network simply looks at packets to see where they need to go. If strong error-checking is needed for an application such as a file transfer, the receiving device will use TCP to examine incoming packets and, if it finds errors, ask the sending machine to retransmit. After all, losing even a single bit of data in a spreadsheet, for example, is a disaster.
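To make this division of labor concrete, here is a minimal sketch in Python (using only the standard socket library) of what a reliability-hungry endpoint does; the host and port are hypothetical placeholders. Opening a SOCK_STREAM socket enlists the endpoint's own TCP stack, while the routers in between do nothing but forward IP packets.

```python
import socket

HOST, PORT = "peer.example.net", 9000  # hypothetical peer endpoint

# SOCK_STREAM requests a TCP connection from the operating system.
# Sequencing, checksums, acknowledgments, and retransmission all happen
# in the two endpoints' TCP stacks, not in the network's core.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"every byte of this spreadsheet must arrive intact")
    reply = sock.recv(4096)  # delivered complete and in order, or the call fails
```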

It turns out that some applications, though, don't need or want strong error-checking. Voice is one—a few data errors don't significantly impede our ability to hear and understand another person, whereas the delays involved in retransmission of bad data can make fluent conversation impossible. Without the separation between TCP and IP, the Internet would not have been suitable for applications that are better served by weaker error-checking.

In other words, without the end-to-end property of the Internet, without the ability to use different error-checking protocols like UDP (user datagram protocol, intended for transmission of small, unsequenced data units) and RTP (real-time transport protocol, designed specifically to relax error-checking) in place of TCP, applications like Internet telephony and online gaming would be awkward to implement and prohibitively expensive. But because Internet error-checking occurs in the endpoints, programmers are free to substitute protocols that are looser or tighter as their application demands.
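A companion sketch, under the same hypothetical host-and-port assumptions, shows the looser alternative: a SOCK_DGRAM socket sends bare UDP datagrams, and nothing anywhere retransmits a lost one, which is exactly what a voice application wants.

```python
import socket

HOST, PORT = "peer.example.net", 9001  # hypothetical peer endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # bare UDP
for frame in (b"20ms-audio-frame-1", b"20ms-audio-frame-2"):
    # Fire and forget: IP delivers each datagram if it can. Neither the
    # network nor the endpoint retransmits a lost frame, so playback
    # continues instead of stalling while bad data is resent.
    sock.sendto(frame, (HOST, PORT))
sock.close()
```

The choice between the two is made entirely in the endpoint software; the network in the middle is identical in both cases.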

The proponents of the end-to-end principle did not envision Internet telephony; they only knew that a network with less, rather than more, functionality in the middle would be more flexible, and more amenable to as-yet-undreamt-of applications.

The Internet, the world's overarching end-to-end network, is now the connectivity medium of communications. Yet telephone company networks are still centralized networks designed for a single application, voice. Phone companies still make more money from voice than from other network traffic, even though the volume of data traffic now exceeds that of voice. Furthermore, Internet voice is getting better and better—its quality can, in fact, far exceed the "toll quality" voice of plain old telephone service.

In addition, smart end devices can set up and manage telephone calls far better than a centralized network. (Why dial a number when you can double-click on it?) In fact, when voice is implemented in end devices, it can be mixed into other kinds of interactions—online game play, collaboration, mutual Web surfing, and many more yet to be discovered—and the idea of a "call" as a special, discrete event could well disappear.
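As a rough illustration of endpoint-managed call setup, here is a hypothetical "click-to-call" exchange sketched in Python. The port number and message format are invented for illustration; real systems use standardized signaling protocols such as SIP, but the principle is the same: the intelligence lives in the end devices, not in a central switch.

```python
import socket

SIGNAL_PORT = 5080  # hypothetical signaling port

def invite(peer_ip: str) -> bool:
    """Ring another endpoint directly; no central office is involved."""
    with socket.create_connection((peer_ip, SIGNAL_PORT), timeout=5) as s:
        s.sendall(b"INVITE audio\n")         # hypothetical message format
        return s.recv(64).startswith(b"OK")  # the peer's own software decides

def answer() -> None:
    """Wait for an incoming invite and accept it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", SIGNAL_PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            if conn.recv(64).startswith(b"INVITE"):
                conn.sendall(b"OK\n")  # from here, audio can flow peer to peer
```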

Playing both ends against the middle

The Internet stands on the brink of making the entire functionality of the telephone company obsolete. But that's not all—with access improvements within the grasp of today's technology, the Internet can do video entertainment better than broadcast, cable, or satellite television can. The Internet stands on the brink of subsuming the value of all existing special-purpose networks.

Telephone companies do not have a business model for running an end-to-end network. They have always depended on voice revenues to subsidize network operations. Now they're faced with the prospect that voice telephony, their cash-cow application, will no longer be bound to their network. (Television and radio face the same prospect.)

It is not likely that the telephone companies will discover how to run a business based on end-to-end networks, because pure end-to-end connectivity will disrupt their single-application business model. As Clayton Christensen points out in The Innovator's Dilemma, incumbents never lead a technology to market when it disrupts the incumbent business model. Exuberant innovation was the rule at Bell Labs, but the only innovations that Ma Bell brought to market were ones that were consistent with the old business model; when AT&T was offered a leadership role in Internet development, it passed—just as Western Union passed on the telephone a century earlier.

None of the networked winner apps of the past decade—e-mail, Web browsing, e-commerce, instant messaging, streaming audio, Web logging, peer-to-peer filesharing, and so on—were brought to market by telephone companies. It's unlikely that telephone companies will field the pioneering communications applications for the new network. This discovery process is better left to more entrepreneurial companies operating in the free market.

What happens now?

There are two main scenarios for telephone company survival. Under the first, they won't; their networks will be managed gradually into oblivion by bankruptcy courts and the regulatory establishment that used to support them. Dial tone continuity would be maintained, but allowed to wither as a business as other alternatives matured. New entities, such as municipalities, other utilities with rights of way, new kinds of companies (especially wireless ones), and maybe even customers themselves, would gain the freedom to discover successful operating models for end-to-end networks.

Under the second scenario, the telephone companies would exercise their remaining competence—their ability to manipulate the various agencies of government to their own advantage—to make it illegal (or prohibitively difficult) to make progress in end-to-end internetworking. Under this second scenario, the Internet would be transformed into a centrally controlled telephone-company-like (and TV-network-like) network for use by approved applications only. In nations that fall victim to this second scenario, those sectors of the economy that depend on progress in communications would devolve to third-world status, while nations that support end-to-end networks race ahead.

No nation can afford the second scenario. We cannot afford to cave in as telephone companies entreat governments to prop up their dying technologies and dying business models.

Distant messages used to be delivered physically, first by sail, then by steam; the telegraph short-circuited those modes before being in turn shouted down by the telephone. We should not allow the established powers of antiquated centralized telephony to block end-to-end internetworking. Telemedicine, distance learning, remote collaboration, multicast news, multiplayer entertainment, and more are waiting in the wings, with yet undiscovered applications promising other new connections and benefits. We should make way for the new end-to-end network.

To Probe Further

Some of this article's ideas are explored further in "The Rise of the Stupid Network," by David S. Isenberg, June 1997, at https://isen.com/stupid.html.

The definitive statement of the End-to-End Principle is given in "End-to-End Arguments in System Design," by Jerome Saltzer, David Reed, and David Clark. Originally presented as a conference paper in April 1981, it was published in revised form in ACM Transactions on Computer Systems, November 1984, pp. 277-288.

Clayton Christensen's The Innovator's Dilemma was published by Harvard Business School Press (June 1997).
