In 1997, Harvard Business School professor Clayton Christensen created a sensation among venture capitalists and entrepreneurs with his book The Innovator's Dilemma. The lesson that most people remember from it is that a well-run business can’t afford to switch to a new approach—one that ultimately will replace its current business model—until it is too late.
One of the most famous examples of this conundrum involved photography. The large, very profitable companies that made film for cameras knew in the mid-1990s that digital photography would be the future, but there was never really a good time for them to make the switch. At almost any point they would have lost money. So what happened, of course, was that they were displaced by new companies making digital cameras. (Yes, Fujifilm did survive, but the transition was not pretty, and it involved an improbable series of events, machinations, and radical changes.)
A second lesson from Christensen’s book is less well remembered but is an integral part of the story. The new companies springing up might get by for years with a disastrously less capable technology. Some of them, nevertheless, survive by finding a new niche they can fill that the incumbents cannot. That is where they quietly grow their capabilities.
For example, the early digital cameras had much lower resolution than film cameras, but they were also much smaller. I used to carry one on my key chain in my pocket and take photos of the participants in every meeting I had. The resolution was way too low to record stunning vacation vistas, but it was good enough to augment my poor memory for faces.
This lesson also applies to research. A great example of an underperforming new approach was the second wave of neural networks during the 1980s and 1990s that would eventually revolutionize artificial intelligence starting around 2010.
Neural networks of various sorts had been studied as mechanisms for machine learning since the early 1950s, but they weren’t very good at learning interesting things.
In 1979, Kunihiko Fukushima first published his research on something he called shift-invariant neural networks, which enabled his self-organizing networks to learn to classify handwritten digits wherever they were in an image. Then, in the 1980s, a technique called backpropagation was rediscovered; it allowed for a form of supervised learning in which the network was told what the right answer should be. In 1989, Yann LeCun combined backpropagation with Fukushima's ideas into something that has come to be known as convolutional neural networks (CNNs). LeCun, too, concentrated on images of handwritten digits.
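The core operation behind those ideas can be sketched in a few lines. This is a minimal illustration, not Fukushima's or LeCun's actual code: a single small filter is slid across an entire image, so a pattern produces the same response wherever it appears. That is the shift invariance the text describes.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation of `image` with `kernel`,
    both given as nested lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Same weights applied at every position in the image.
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A tiny vertical-edge detector responds with the same magnitude
# no matter where the edge sits in the (toy, one-row) image.
kernel = [[1, -1]]
img_a = [[0, 1, 1, 1]]   # edge between columns 0 and 1
img_b = [[0, 0, 0, 1]]   # edge between columns 2 and 3

print(convolve2d(img_a, kernel))  # [[-1, 0, 0]]
print(convolve2d(img_b, kernel))  # [[0, 0, -1]]
```

In a real CNN the filter weights are not hand-picked as above; they are learned by backpropagation, and many filters are stacked in layers.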
Over the next 10 years, the U.S. National Institute of Standards and Technology (NIST) compiled a database of handwritten digits, which LeCun modified into a set of 60,000 training digits and 10,000 test digits. This standard test database, called MNIST, allowed researchers to precisely measure and compare the effectiveness of different improvements to CNNs. There was a lot of progress, but CNNs were no match for the entrenched AI methods in computer vision when applied to arbitrary images generated by early self-driving cars or industrial robots.
But during the 2000s, more and more learning techniques and algorithmic improvements were added to CNNs, leading to what is now known as deep learning. In 2012, suddenly, and seemingly out of nowhere, deep learning outperformed the standard computer vision algorithms in a set of test images of objects, known as ImageNet. The poor cousin of computer vision triumphed, and it completely changed the field of AI.
A small number of people had labored for decades and surprised everyone. Congratulations to all of them, both well known and not so well known.
But beware. The message of Christensen’s book is that such disruptions never stop. Those standing tall today will be surprised by new methods that they have not begun to consider. There are small groups of renegades trying all sorts of new things, and some of them, too, are willing to labor quietly and against all odds for decades. One of those groups will someday surprise us all.
I love this aspect of technological and scientific disruption. It is what makes us humans great. And dangerous.
This article appears in the July 2022 print issue as “The Other Side of The Innovator’s Dilemma.”
Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.