Don’t panic if you look for your computer today and can’t find it. We have it on the authority of technology maven Ray Kurzweil that this year computers will have vanished because of miniaturization. As he said at the TED conference in February 2005:
By 2010 computers will disappear. They’ll be so small, they’ll be embedded in our clothing, in our environment. Images will be written directly to our retina, providing full-immersion virtual reality, augmented real reality. We’ll be interacting with virtual personalities.
If you have a different impression of the world today, Kurzweil would want you to know that he is technically correct. If the rest of the world fails to think that’s enough, the rest of the world is wrong.
Of course, Kurzweil did not mean to say that all computers would actually disappear. Rather, embedded microprocessors would allow many of the functions once uniquely served by computers to disseminate to phones, tablet computers, and even cars, clothes, and key chains. And in that sense, 2010 might indeed be seen as a ringing vindication of Kurzweil’s prophecy, because smartphones and iPads are everywhere.
But a moment’s reflection reveals that expansive interpretation of Kurzweil’s remarks to be, at bottom, insipid. Here’s why: Many of those same devices were already popular commercial products in 2005. Stylus-based computer interfaces have been around since at least the 1980s. Microsoft introduced pocket and tablet versions of Windows in 2000 and 2001. Smartphones and PDAs emerged in the mid-1990s. Handspring brought out the Palm OS Treo in 2002. The RIM BlackBerry smartphone also came out in 2002.
So by Kurzweil’s soft definition, the computer had already disappeared when he was on stage in 2005. In fact, many in his audience may already have had its replacements in their pockets. If his rhetoric about computers disappearing by 2010 isn’t meant to be taken literally, then essentially all that’s left is the claim that smartphones and other digital devices would get smarter, smaller, and more popular, which would not win any prizes for sagacity.
Therein lie the frustrations of Kurzweil’s brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books, and to have cofounded Singularity University, where executives and others are paying quite handsomely to learn how to plan for the not-too-distant day when those disappearing computers will make humans both obsolete and immortal.
Ray Kurzweil’s genius is beyond dispute. He has been awarded the National Medal of Technology, a Lemelson-MIT Prize, and a raft of other international accolades and honorary degrees. He is in the National Inventors Hall of Fame in the United States. In high school he wrote software that could compose music in the style of classical composers (an achievement that earned him an appearance on the TV game show “I’ve Got a Secret” in 1965). He invented the first optical scanner capable of interpreting writing in any typeface, then directed the further development of the first CCD flatbed scanner and text-to-speech synthesizer so that he could build the Kurzweil Reading Machine for the blind. He has developed commercial speech recognition systems used around the world, founded a number of companies, and started a hedge fund.
And yet, while garnering honors for his brilliance, Kurzweil has also become famous (or notorious) for his views on the technological future, which he has outlined in the best-selling books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), and The Singularity Is Near (2005). In brief, they describe his discovery of a “law of accelerating returns” that governs technological progress. Computer intelligence and other technologies will evolve exponentially fast, he says, bringing true artificial intelligence, human immortality, and fantastic nanoengineering capabilities within a very few decades. Within the century, they will push history to a technological singularity literally beyond imagination.
Kurzweil is confident, for instance, that by 2029 researchers, having reverse engineered the human brain, will build an AI that can pass as human. (He has a US $20 000 bet to that effect with computing pioneer Mitchell Kapor riding at the Long Bets Web site.) Neuroscientists, AI researchers, and others have objected that no one today has more than the faintest idea of how to accomplish these feats and that his time line is highly unrealistic. Kurzweil dismisses all such objections: The obstacles will undoubtedly melt away in the face of Moore’s Law and the unstoppable acceleration of technology.
In his talks, Kurzweil says he began studying rates of technological change because he realized that a major reason tech businesses fail is not that they fail to build what they intend but that their timing is wrong: By the time their innovations come to market, the opportunity is past, having become irrelevant or having been seized by someone or something else. To help spread the gospel of accelerating returns, Kurzweil and entrepreneur Peter Diamandis established Singularity University, in California, which offers 9-day executive training sessions (for $15 000) and 10-week graduate studies (for $25 000) on how to understand and master exponentially advancing technologies.
All these enterprises ride on the credibility of Kurzweil’s vision. He began publishing his predictions 20 years ago, so it is worth assessing their accuracy to date. Unfortunately, scoring Kurzweil’s prophecies turns out to be a difficult and contentious exercise.
Ray Kurzweil’s assessment of his own 108 predictions from his 1999 book, The Age of Spiritual Machines:
89 “entirely correct” (by the end of 2009)
13 “essentially correct” (realized within just a few years)
3 “partially correct”
2 “about 10 years off”
1 “just wrong”
Paeans to Kurzweil’s oracular prowess often begin by noting that in his 1990 book, The Age of Intelligent Machines, Kurzweil predicted the rise of the Internet as a medium for public communications, commerce, education, and entertainment. “By early in the next century, personal computers will be portable laptop devices containing cellular phone technology or wireless communications with both people and machines. Our portable computers will be gateways to international networks of libraries, data bases, and information services…” he wrote.
The World Bank estimates that in 1990 only about 2 million people had Internet access. By 2000 that number had grown to 124 million with the transformative and all-consuming rise of the World Wide Web. On those grounds, Kurzweil’s prediction might seem to be a gem. Nevertheless, some facts scuff its luster.
The first is that to see, in 1990, a society using networked computers for everyday tasks, you didn’t need to be prophetic. You just needed to be French. France’s government began issuing dumb terminals to telephone subscribers for free in 1981 to encourage use of the paid Minitel online information, or videotex, service. Minitel allowed users to look up phone numbers, purchase train and airline tickets, use message boards and databases, and purchase items through mail order.
“The French use almost three million computer terminals to perform such tasks as looking up phone numbers electronically and communicating with strangers over what has been dubbed the ‘electronic singles bar,’” notes a 15 September 1987 article by Andrew Pollack in The New York Times.
That same article began by lamenting that, in the United States, “The vision of an electronic society in which consumers read the news, pay bills and make airplane reservations on their home computers has proved illusory.” In other words, three years before Kurzweil’s book, some people had not only imagined an online society but had already questioned whether it could catch on in the United States. Even so, by the late 1980s, CompuServe, GEnie, Prodigy, Dow Jones News/Retrieval, and other commercial services were offering e-mail, conversation, information, and entertainment to subscribers with personal computers who were willing to pay up to $12 per hour for the privilege. The Videotex Industry Association estimated that in 1987, 40 such online services were serving 750 000 consumers. In addition, hundreds, perhaps thousands, of noncommercial electronic bulletin boards also served various interests over the phone lines.
What kept these services from mainstream success were high costs and technical difficulties. Pollack’s article quotes Gary Arlen, publisher of the industry newsletter Interactivity Report, as saying that videotex “is languishing, and everyone is sort of waiting for the next breakthrough.”
That breakthrough would of course be the invention of the World Wide Web, which Tim Berners-Lee proposed in 1989 and which opened to the public in 1991. The Web made the Internet easier, cheaper, and more adaptable for more users. But the early-adopting portion of the public had demonstrated an appetite for online services years earlier.
Popular culture in the late 1980s was also not short on visions of a heavily computerized, network-linked society. Most of these owed a debt to William Gibson’s hit 1984 novel Neuromancer, a seminal work of “cyberpunk” fiction that popularized the term “cyberspace.” Rather famously, Gibson has said he didn’t know anything about computers when he wrote Neuromancer, so his vision didn’t come from any remarkable insight into the technology. He was simply picking up on ideas that were already abroad in films such as Blade Runner and Tron from 1982, and in such novels as Bruce Sterling’s 1988 award-winner Islands in the Net and the 1989 Japanese manga series Ghost in the Shell.
The fact that many sources anticipated Kurzweil’s prediction of a vigorous online society does not discredit it. But the praise that congratulates him for the originality of the idea implicitly slights all those others who got there before him.
Kurzweil makes lots of predictions. He really hit his stride in The Age of Spiritual Machines, in 1999, which included specific claims about what life would be like in 2009. (And that’s just for openers: The book offers scenarios by decades up through 2099 and then speculates about how intelligences will contemplate the universe millennia from now.)
Kurzweil got a lot right. But in instance after instance, his unambiguously correct statements are wedded to others that sound close to reality…but are also somewhat off. They’re like descriptions of the world as seen through a fish-eye lens.
“It is now 2009. Individuals primarily use portable computers, which have become dramatically lighter and thinner than the notebook computers of ten years earlier. Personal computers are available in a wide range of sizes and shapes,” he wrote, and if he had ended the sentence there, surely no one would disagree. But instead he continues: “—and are commonly embedded in clothing and jewelry such as wristwatches, rings, earrings, and other body ornaments. Computers with a high-resolution visual interface range from rings and pins and credit cards up to the size of a thin book.” And: “People typically have at least a dozen computers on and around their bodies, which are networked using ‘body LANs’ (local area networks).”
Is that all true? Accept for now that smartphones, music players, and even chip-enabled credit cards should all count as computers because they contain a microprocessor, and that they can even be loosely called jewelry, clothing, or body ornaments. Even so, how many of us have more than a dozen of these “computers” on our persons? Beyond a Bluetooth-coupled phone and earpiece, how many are in any sense networked together? How many sport a “high-resolution visual interface”?
Or consider what Kurzweil wrote about education. He correctly projected that technology would play a much larger role in the classroom and that distance learning and teaching software would trend upward. But he also asserted that students would own and use computers that weigh less than a pound, with which they would interact primarily by voice and stylus. Teachers would “attend primarily to issues of motivation, psychological well-being, and socialization,” while software handles instruction. Is this a recognizable, accurate description of schools today?
He also seems to have had high hopes a decade ago for the antitumor compounds called angiogenesis inhibitors. His footnotes direct attention to a front-page New York Times story from 3 May 1998 that is notorious in science-writing circles for having grossly overhyped the promise of the research. In his book’s formal discussion, Kurzweil merely suggests that angiogenesis inhibitors would help to reduce cancer. Yet in a puckish chapter where Kurzweil chats with a fictional interviewer from the future, he has her say that his prediction was “actually quite understated. Bio-engineered treatments, particularly antiangiogenesis drugs…have eliminated most forms of cancer as a major killer.” To which Kurzweil replies, “Well, that’s just not a prediction I was willing to make.” Talk about having it both ways.
It seems only fair to allow some latitude for interpretation on the dates. But even then, it is hard to define the rightness or wrongness of Kurzweil’s predictions.
Kurzweil himself has no such difficulty, however. He knows precisely how well he’s doing. Last January, Michael Anissimov of the Accelerating Future Web site posted an item in which he suggested that seven of Kurzweil’s predictions for 2009 seemed to be wrong. Kurzweil replied with a note that argued it was wrong to single out merely seven predictions when he had actually made 108 in The Age of Spiritual Machines.
“I am in the process of writing a prediction-by-prediction analysis of these, which will be available soon and I will send it to you,” he wrote. “But to summarize, of these 108 predictions, 89 were entirely correct by the end of 2009.” Another 13 were “essentially correct,” by which he meant that they would be realized within just a few years. “Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong,” he wrote. So by his own scoring, he is at least 94.4 percent accurate.
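His 94.4 percent figure follows only if the “essentially correct” predictions are counted as hits alongside the “entirely correct” ones; counting only the latter yields a less impressive score. A quick sketch of the arithmetic, using the category counts from his reply quoted above:

```python
# Kurzweil's self-reported scoring of his 108 predictions from
# The Age of Spiritual Machines, per his reply to Anissimov.
counts = {
    "entirely correct": 89,
    "essentially correct": 13,  # realized "within just a few years"
    "partially correct": 3,
    "about 10 years off": 2,
    "just wrong": 1,
}

total = sum(counts.values())                              # 108 predictions
generous = counts["entirely correct"] + counts["essentially correct"]

print(round(100 * generous / total, 1))                   # 94.4
print(round(100 * counts["entirely correct"] / total, 1)) # 82.4
```

So the headline number depends entirely on treating predictions that missed their date by “just a few years” as successes.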
Kurzweil has not yet released that analysis of his track record, so it is hard to know how some of his predictions for 2009—the adoption of intelligent highways and self-piloting cars, sharp reductions in cancer, and continuous economic growth for the United States and the stock market through 2019, for example—fit into his tally. Maybe one of those was meant tongue in cheek, or maybe he doesn’t regard them as real predictions; otherwise, it seems as though he regards all of them as at least partly or imminently correct. Judge for yourself.
Based on Kurzweil’s defenses of the items that Anissimov had questioned, however, his analysis seems unlikely to satisfy his critics. For instance, Kurzweil stood by his assertion that in 2009, 3-D chip architectures would be common. “Many if not most semiconductors fabricated today are in fact 3D chips, using vertical stacking technology,” he wrote. “It is obviously only the beginning of a broad trend, but it is the case that three-dimensional chips are commonly used today.”
Actually, 3-D integrated circuits are currently very much a niche product, with limited uses in DRAM, image sensors, and a few other applications. A 2008 market survey by Yole Développement, in Lyon, France, projected that by 2015 3-D devices would represent about 25 percent of the memory market and only about 6 percent of the rest of the semiconductor market. Kurzweil is surely right that 3-D chips will become widespread within another few years, but it is simply wrong to insist that they already are.
Kurzweil also stands by his claim that computer displays built into eyeglasses would project images into users’ eyes because some such systems do exist, and says, “The prediction did not say that all displays would be this way or that it would be the majority, or even common.” Similarly, he defends his claim that translation software would be “commonly used” to allow people speaking different languages to communicate by phone by pointing to smartphone apps that emerged at the end of 2009. He allows that one could quibble about how “common” their use is.
“So far, I haven’t seen Kurzweil straight-up admit that he was wrong. I think he would benefit from doing so on some of these points,” says the blog post by Anissimov, who seems to admire the man but thinks futurists should be accountable for their statements.
Kurzweil’s reply asserts that he is all for futurist accountability, “but such reviews need to be free of bias, fair, and not subject to selection bias and myopic interpretations of both the words used and the current reality.” Still, it is hard to square his objection to “myopic,” literal interpretations with his lawyerly defenses of his predictions that hinge on their precise wording and creative interpretations of the meaning of everyday words.
Kurzweil is extremely well informed about technologies in development and highly insightful about how they can feed into one another, particularly over the relatively near term. He is very good on trends, and his predictions are thought provoking. For the people who pay to hear him speak or to read his books, perhaps that is enough.
On the other hand, if Kurzweil is right that a failure to understand the timing of technological change is a major reason that businesses fail, then let’s hope that nobody listening to Kurzweil takes his predictions at strict face value. Anyone who was encouraged 10 years ago to hit the market this past decade with products or services contingent on cybernetic chauffeurs or widespread real-time speech translation could be in trouble.
Nevertheless, his unwavering confidence in the law of accelerating returns allows him to shrug off contradictory facts and perspectives as mere temporary inconveniences. A year here, a decade there: The accelerating returns of technology will sweep them all away en route to a singularity beyond human imagination ruled by one eternal truth—that Ray Kurzweil was, is, and always will be right.
At least 94.4 percent of the time, anyway.
Editor’s comment, 30 December 2010: Mr. Kurzweil’s objection to John Rennie’s critique begins with, and makes much of, the use of the word “foolproof.” In fact, that word was never used by Mr. Rennie. In an editing error, the word “foolproof” was inserted into a sentence in Mr. Rennie’s article, after Mr. Rennie had reviewed the editing. IEEE Spectrum regrets this lapse.
About the Author
John Rennie was the editor in chief of Scientific American from 1994 to 2009. He now writes, blogs (The Gleaming Retort, http://blogs.plos.org/retort), and teaches journalism at New York University. He lives in New York City with his Emmy-winning video editor wife and their “swell dog,” Newman. He is an occasional television commentator and a black belt in Kenshikai Karate, which is “a small Japanese style that none of your readers will have heard of.”