This is part of IEEE Spectrum's special report: What's Wrong—What's Next: 2003 Technology Forecast & Review.
Technological changes affecting telephony are so profound that the end of the large incumbent telephone companies is at hand. They will fail or become unrecognizable, as telephone services are embedded in a more general-purpose network—the Internet.
The U.S. Telecommunications Act of 1996 was designed to cope with these changes in technology, but it has been an utter failure, in large part because it did not anticipate a pivotal architectural change: the end-to-end principle.
Under this principle, the network does nothing more than deliver bits from point A to point B. The result is a general network that is not constrained by any one application. In contrast, the first communications technologies were designed for special purposes: telephony, television, and data communications, each with its own specialized network. For a new service to become available—touch-tone dialing, for example, or caller-ID—the specialized switching equipment in the telephone central offices and substations had to be reprogrammed or even replaced.
Zeroing in on the endpoints
The original design of the Internet was not end-to-end, either. It followed older design principles born in an age of expensive computing logic. But as the power of microprocessors doubled again and again and again, the ability of the devices at the endpoints to run sophisticated communications protocols, such as error correction, file and mail transport, and encryption, grew.
Without this radical reduction in the cost of computing, the machinery to run communications protocols would be so expensive that it would have to remain at the center of the network, controlled by a centralized authority. The alternative, an end-to-end network with the equivalent of mainframe computers at the endpoints, would have been prohibitively expensive. Instead, thanks to relentless iteration of Moore's Law, network endpoints today are small, powerful, inexpensive devices, such as PCs, home routers, and media hubs. With such power in the endpoints, the need to embed the functions of a network in the network's core shrank.
The consequences of Moore's Law on network architecture weren't immediately obvious. But in the early 1980s, three leading Internet network architects, Jerome Saltzer, David Reed, and David Clark, proposed that if functions like error-checking could be implemented at the endpoints of the network, the Internet would be more flexible, and a wider variety of applications—even unknown future applications—could be more easily implemented.
The logical conclusion of this argument was the separation into different protocols of functions that had hitherto been bundled together. The first important instance of this new, layered architecture was the separation of the error-checking protocol (the transmission control protocol, or simply TCP) from the Internet protocol (IP). Today, at the IP layer, the network simply looks at packets to see where they need to go. If strong error-checking is needed for an application such as a file transfer, the receiving device will use TCP to examine incoming packets and, if it finds errors, ask the sending machine to retransmit. After all, losing even a single bit of data in a spreadsheet, for example, is a disaster.
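This division of labor can be sketched in a toy simulation (Python here; the lossy channel and CRC-32 checksum scheme are illustrative assumptions, not how TCP is actually implemented). The "network" function merely delivers bits, possibly damaged, while the receiving endpoint checks integrity and keeps requesting retransmission until the data arrives intact:

```python
import zlib

def send_over_network(payload: bytes, corrupt: bool) -> tuple[int, bytes]:
    """The network's role (IP, in the article's terms): deliver bits from
    A to B, possibly damaged. It neither checks nor repairs anything."""
    checksum = zlib.crc32(payload)     # computed by the sending endpoint
    if corrupt:
        payload = b"X" + payload[1:]   # bits flipped in transit
    return checksum, payload

def reliable_receive(payload: bytes, deliveries_with_errors: int) -> tuple[bytes, int]:
    """The receiving endpoint's role (TCP, in the article's terms): verify
    the checksum and ask the sender to retransmit until the data is intact."""
    tries = 0
    while True:
        tries += 1
        checksum, received = send_over_network(
            payload, corrupt=tries <= deliveries_with_errors)
        if zlib.crc32(received) == checksum:
            return received, tries     # clean copy finally delivered

data, tries = reliable_receive(b"spreadsheet cell: =SUM(A1:A9)",
                               deliveries_with_errors=2)
print(tries)  # 3: two corrupted deliveries, success on the third
print(data)
```

Note that all the intelligence sits in `reliable_receive`: swap it out, and the same dumb network serves an application with entirely different needs.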
Some applications, though, turn out not to need or want strong error-checking. Voice is one—a few data errors don't significantly impede our ability to hear and understand another person, whereas the delays involved in retransmission of bad data can make fluent conversation impossible. Without the separation between TCP and IP, the Internet would not have been suitable for applications that require only weak error-checking.
In other words, without the end-to-end property of the Internet, without the ability to use different error-checking protocols like UDP (user datagram protocol, intended for transmission of small, unsequenced data units) and RTP (real-time transport protocol, designed specifically to relax error-checking) in place of TCP, applications like Internet telephony and online gaming would be awkward to implement and prohibitively expensive. But because Internet error-checking occurs in the endpoints, programmers are free to substitute protocols that are looser or tighter as their application demands.
The proponents of the end-to-end principle did not envision Internet telephony; they only knew that a network with less, rather than more, functionality in the middle would be more flexible, and more amenable to as-yet-undreamt-of applications.
The Internet, the world's overarching end-to-end network, is now the connectivity medium of communications. Yet telephone company networks are still centralized networks designed for a single application, voice. Phone companies still make more money from voice than from other network traffic, even though the volume of data traffic now exceeds that of voice. Furthermore, Internet voice is getting better and better—its quality can, in fact, far exceed the "toll quality" voice of plain old telephone service.
In addition, smart end devices can set up and manage telephone calls far better than a centralized network. (Why dial a number when you can double-click on it?) In fact, when voice is implemented in end devices, it can be mixed into other kinds of interactions—online game play, collaboration, mutual Web surfing, and many more yet to be discovered—and the idea of a "call" as a special, discrete event could well disappear.