In the most recent season of the TV comedy “Silicon Valley,” the lead character, Richard Hendricks, is obsessed with creating his dream of a decentralized Internet. The theme did not go unnoticed in the print media: a number of publications, including IEEE Spectrum, pointed to real ongoing efforts to build a decentralized Internet. This prompted me to consider the relative advantages of centralization and decentralization.
Until the 1980s, most communication networks were relatively centralized. In the United States, AT&T had a monopoly that was regulated by the government. Connections were dedicated, point-to-point channels, and the attachment of foreign devices (meaning anything not manufactured by AT&T’s Western Electric Co. subsidiary) was forbidden. I remember with embarrassment my testimony at the federal antitrust trial, trying halfheartedly to justify AT&T’s fallback strategy of allowing digital connections to such devices via a Western Electric “data coupler.”
The antitrust trial resulted in the dismantling of AT&T. At the same time, the rise of the Internet was causing centralized communication networks to unravel. The Internet was designed to be open and decentralized. Packet switching ensured robustness, and the Internet Protocol enforced open and universal interconnection. The end-to-end principle—whereby the network’s job is simply to move data from one point to another while making the existence of all the points in between as invisible as possible—became a guiding philosophy of network transparency.
After almost a half century, the plumbing of the Internet is still decentralized, but the overlay of the World Wide Web is not. A handful of giant companies has virtual monopoly control of traffic and commerce. And governments censor the Web, control access, and are one of many sources of surveillance. But, I ask myself, Is Hendricks’s peer-to-peer network the answer to these ills?
We technologists have some powerful tools to create distributed systems, including mesh networks, peer-to-peer protocols, cryptography, and blockchains. Ongoing efforts like blockchain-based Ethereum offer exciting potential. One review asked readers to imagine “facebook without Facebook, uber without Uber.” This is possible, but it does seem rather unlikely.
In a way, we’ve been there, done that. Napster was a peer-to-peer network not unlike Hendricks’s. For a while it was a supernova on the Net, but it had a central directory and thus someone to sue, and is no more. BitTorrent, however, also employs a peer-to-peer protocol and has been widely used for quite a few years.
Richard Hendricks envisions his peer-to-peer network storing data on a distributed swarm of cellphones. I’ve been wondering about this. Do I want my data stored on random cellphones? I’d have no problem if it were transient data or entertainment files, but what about my family photos? Ten years from now, when the photos might be wanted, all those cellphones would be gone. The data would probably have been erased years earlier. The problem with systems implemented with peers is that no one is responsible. Whom do you call when your mesh network is out? Or who, ultimately, stands behind a blockchain?
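The durability worry can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative and not from any real system: it assumes each phone independently stays in service for another year with some probability p (a guessed figure, not measured data), and that nothing re-replicates lost copies.

```python
# Hypothetical sketch: chance that a file survives when stored only
# as r replicas on consumer phones, with no re-replication.
# Each phone is assumed to remain in service for another year with
# probability p, independently -- an illustrative assumption.

def survival_probability(p: float, years: int, replicas: int) -> float:
    """Probability that at least one replica is still reachable."""
    per_device = p ** years            # one phone lasting the whole period
    all_lost = (1 - per_device) ** replicas
    return 1 - all_lost

# With phones replaced roughly every three years (p ~= 0.7) and
# three replicas, almost every file is gone after a decade:
print(survival_probability(0.7, 10, 3))   # roughly 0.08
```

Under these assumed churn numbers, even ten replicas would leave the odds well under even, which is why real peer storage systems must continuously detect loss and re-replicate, and why someone has to run that machinery.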
On the other hand, distributed architectures have an engineering appeal. They offer organic growth and resilience and are a cradle for democratic usage and control. Sometimes having no one in charge can be a good thing. However, as legal scholar Larry Lessig has pointed out, regulation is achieved through four means: law, norms, markets, and architecture. We engineers usually control only one of these, and that’s something we should always keep in mind.
This article appears in the November 2017 print issue as “The Lure of Decentralization.”