The Death of Moore’s Law Will Spur Innovation
As transistors stop shrinking, open-source hardware will have its day
Companies that produce open-source hardware are few and far between. At least, they are if you define them in the usual way: an enterprise that provides documentation and permission sufficient for others to re-create, modify, improve, and even make their own versions of the devices it sells. And although open hardware has made strides in recent years—including an increasing number of companies adhering to these practices along with the establishment of the Open Source Hardware Association—it remains a niche industry.
You might guess the reason to be simple—such companies must be set up and run by idealists who lack any hardheaded business sense. Not true! What’s held back the open-source hardware movement is not a lack of business acumen; it’s the rapid evolution of electronic technology.
The reasons for this are subtle, but as I will explain below, swift advances in electronic technology inherently favor large “closed” businesses at the expense of small teams or individual innovators, who benefit most by working with open systems. At least that’s the way things have been. But there are changes coming that I expect will tilt the balance the other way.
The relevant shifts are fundamentally tied to the pace of microelectronic miniaturization, which for decades has been well described by Gordon Moore’s eponymous law. So it seems fitting that on the 50th anniversary of his seminal publication on the topic, we pause to consider how the impending end of Moore’s Law—or at least its slowing—could, in fact, be tremendously beneficial to many manufacturers and consumers. But before I present this contrarian forecast, I should review for younger readers a little bit about how we got where we are.
In the beginning, there was the vacuum tube. And just about all the hardware based on it was what today we would call open. Early consumer electronic products, such as radios and television sets, often shipped with user manuals that contained full schematics, a list of replacement parts, and detailed instructions for servicing. Very little changed when the transistor was first introduced.
Indeed, into the 1980s, computers often came with schematic diagrams of their circuit boards. The Apple II, for example, shipped with a reference manual that included a full schematic of the computer’s main board, an artifact that strongly influenced me to pursue a career in electronic design.
Modern user manuals lack such depth. The most complex diagram in a recent Mac Pro user manual instructs the purchaser on how to sit at the computer: “lower back supported,” “thighs tilted slightly,” “shoulders relaxed,” and so forth.
What happened? Did electronics just get too hard to service or improve?
Not really. In fact, improving electronic products has become too easy—particularly for the system integrators. For decades, they have, in essence, been able to sit and wait for the ICs populating their circuit boards to get better rather than put in the hard work needed to hone their existing product designs. For example, throughout the 1990s and into the new millennium, programmers were encouraged to abandon hand-optimized assembly language in favor of cramming in more features using ever higher-level languages. Snappy performance, if it wasn’t there on release day, would come soon enough with the next generation of CPUs.
You can see this effect clearly if you graph the “goodness” of electronic gadgets over the years. Pick virtually any metric—performance, feature set, whatever—and arrange your graph so that the plotted parameter doubles every 18 months following Moore’s Law. But do that on a chart with a linear vertical axis. Most diagrams depicting Moore’s Law use a logarithmic vertical scale, which flattens the curve’s sharp upward trend into a much more innocuous-looking straight line.
Slow it down: The gains a small company could reasonably make (black line, a 75-percent-per-year, noncompounding rate) can’t compete with Moore’s Law (red line) if the doubling time is just 18 months (left). But a doubling time of 36 months (right) provides ample opportunity (yellow).
Now compare the sharply climbing Moore’s Law curve with what might come from a design team at a small-scale manufacturer working diligently to improve its latest product by polishing the firmware or tweaking the way memory and other components are connected. Assume that such a team can achieve a constant but respectable rate of progress. Let’s say the first year of such work improves the product by 75 percent over its initial performance, and that improvements continue to accrue by this same amount year after year. How do the results compare to what the design team gains just by sitting and waiting for Moore’s Law to do its magic?
Constant Improvement vs. Moore’s Law: an interactive graph (available online) showing how the opportunity for small-scale innovators varies with the rates of linear and exponential change. Illustration: David Schneider
Not well. Indeed, if you plot both lines, the linear improvement and the exponential Moore’s Law curve, you’ll see only a tiny sliver of opportunity for diligent work on the part of a small-scale manufacturer to make for a better gizmo. And that opportunity expires within two years of product launch, because at that point higher-performance chips become available, requiring the company to create an entirely new product that uses them lest it lose out to the competition.
The two curves I’ve just described highlight the central challenge that small innovators have faced during the past few decades. Almost universally, it has been more rewarding for them to sit and wait rather than innovate. In particular, if it takes two years to conceive and implement changes to a design that doubles its performance, you and your customers are better served by doing nothing and upgrading to the latest available ICs two years down the road. It’s been a Sisyphean exercise for the engineers at many small businesses to try to race against Moore’s Law.
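The arithmetic behind this race is easy to check. Here is a minimal sketch (my own illustration, not from the article) that pits the 75-percent-per-year noncompounding line against an exponential curve with an 18-month doubling time and finds when the exponential pulls ahead for good:

```python
def linear_gain(years, rate=0.75):
    """Noncompounding improvement: +75% of the launch baseline per year."""
    return 1.0 + rate * years

def moores_law_gain(years, doubling_years=1.5):
    """Exponential improvement: performance doubles every `doubling_years`."""
    return 2.0 ** (years / doubling_years)

def crossover(doubling_years, rate=0.75, step=0.01, horizon=20.0):
    """Return roughly when (in years after launch) waiting on Moore's Law
    permanently overtakes diligent linear improvement.

    Walks backward from a distant horizon to find the last moment at which
    the linear line was still ahead."""
    t = horizon
    while t > 0 and moores_law_gain(t, doubling_years) > linear_gain(t, rate):
        t -= step
    return round(t + step, 2)

if __name__ == "__main__":
    # With an 18-month doubling time, the window closes in about two years.
    print(crossover(doubling_years=1.5))
```

Running this yields a crossover just shy of two years, matching the "tiny sliver of opportunity" described above; rerunning it with `doubling_years=3.0` previews the much wider window discussed later in the article.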
Indeed, the exponential growth of Moore’s Law works against them while it favors large businesses, which have the resources to work on three or four generations of products simultaneously. And even for large companies, this is hard.
The race manufacturers are in to keep up with the pace of Moore’s Law has several pernicious effects. The most relevant one here is that the small edge (of perhaps a few months’ time) created by keeping a design proprietary and forcing the competition to reverse engineer your products constitutes a significant advantage, one that manufacturers are understandably hesitant to squander. Their reluctance to give up even the slightest edge on the competition helps explain why they rarely share schematics, code, or other technical details about their products.
Thankfully, Moore’s Law is slowing, and the dynamics I’ve just described are changing.
Before I say how, though, let me defend my assertion that Moore’s Law is slowing. Lots of analysts and commentators have warned recently that the era of exponential gains in microelectronics is coming to an end. But I do not really need to hang my argument on their forecasts. The reduction in size of electronic components, transistors in particular, has indisputably brought with it an increase in leakage currents and waste heat, which in turn has slowed the steady progression in digital clock speeds in recent years. Consider, for example, the clock speeds of the various Intel CPUs at the time of their introduction. After making dramatic gains, those speeds essentially stopped increasing about a decade ago.
Since then, CPU makers have been using multicore technology to boost performance, despite the difficulty of implementing such a strategy. [See “The Trouble With Multicore,” by David Patterson, IEEE Spectrum, July 2010.] But engineers didn’t have much of a choice: Key physical limits prevented clock speeds from getting any faster, so the only way to use the increasing number of transistors Moore’s Law made available was to create more cores.
Transistor density continues to increase exponentially, as Moore predicted, but the rate is decelerating. In 1990, transistor counts were doubling every 18 months; today, that happens every 24 months or more. Soon, transistor density improvements will slow to a pace of 36 months per generation, and eventually they will reach an effective standstill.
When will that be? The short answer is, nobody really knows. But one study suggests that it will stop at an effective gate length of about 5 nanometers sometime between 2020 and 2030. Five nanometers is about the space between 10 silicon atoms, so even if this guess is wrong, it can’t be wrong by much.
The implications are profound (at least for the kind of person who reads this magazine). Someday in the foreseeable future, you will not be able to buy a better computer next year. The flash drive you purchase next will cost the same and store the same number of bits as the one you’re replacing. And you’ll have to stop looking forward to your next phone being more powerful and doing more amazing things than your last one.
Rather than expecting that within a few years of its purchase you’ll just throw out whatever electronic gizmo you buy, you’ll be anticipating keeping it for a good long while.
Under such a regime, you’ll probably want to purchase things that are more nicely made to begin with. The idea of an “heirloom laptop” may sound preposterous today, but someday we may perceive our computers as cherished and useful heirlooms to hand down to our children, much as some people today regard wristwatches or antique furniture.
This slowing of Moore’s Law portends a bright future for many small businesses—and likewise for open-hardware practices. To see why this is the case, let’s revisit the comparison I made earlier between Moore’s Law exponential growth and a linear (75 percent, noncompounding) rate of technology improvement. But this time, consider a new Moore’s Law scenario, with a doubling time of 36 months instead of 18 months.
Were you to plot those two lines, you would find that linear improvement can go on for more than eight years before it gets overtaken by the 36-month Moore’s Law curve. And there is a point at around year two or three after product launch when a company would have a substantially better product if it was incrementally optimizing its design all along rather than just waiting for Moore’s Law to operate. In other words, there is a genuine market window for profiting from innovative improvements made at a pace that the engineers at small businesses can manage.
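To put numbers on that window, here is a small sketch (my own, reusing the article’s assumed rates) that scans the years after launch for the point where the steadily improved product most outperforms the wait-for-Moore’s-Law one, under a 36-month doubling time:

```python
def linear_gain(years, rate=0.75):
    """Noncompounding improvement: +75% of the launch baseline per year."""
    return 1.0 + rate * years

def moores_law_gain(years, doubling_years=3.0):
    """Exponential improvement with a 36-month doubling time."""
    return 2.0 ** (years / doubling_years)

def peak_advantage(doubling_years=3.0, rate=0.75, step=0.01, horizon=12.0):
    """Scan the years after launch and return (year, ratio) at the point
    where the linear-improvement product most outperforms the one built
    by waiting for Moore's Law."""
    best_t, best_ratio = 0.0, 1.0
    t = 0.0
    while t <= horizon:
        ratio = linear_gain(t, rate) / moores_law_gain(t, doubling_years)
        if ratio > best_ratio:
            best_t, best_ratio = t, ratio
        t += step
    return round(best_t, 2), round(best_ratio, 2)

if __name__ == "__main__":
    year, ratio = peak_advantage()
    # The advantage peaks around year three, when the incrementally
    # improved product is roughly 60 percent better.
    print(year, ratio)
```

The peak lands around year three at a ratio of about 1.6, which is the "substantially better product" the scenario above describes.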
Another benefit you can expect as Moore’s Law decelerates is a greater standardization of platforms. A decade ago, it would have been ridiculous to create a standard tablet or mobile-phone chassis with interchangeable components—the underlying technology was evolving too quickly. But this has now become a reasonable proposition. [See “Project Ara: Google Wants Your Phone to Go to Pieces,” Spectrum, January 2015.]
The creation of stable, performance-competitive open platforms will be enabling for small businesses. Such companies can still choose to keep their designs closed, but by doing so they will be forced to create a proprietary infrastructure to support the development of their products and to build on them. Many of those companies will find that they are spending needless time and energy working on generic hardware, time they could have spent refining the parts of their designs that make their products special. So more and more companies will choose to work with open hardware, just as many businesses have already done with open software.
Another change I foresee is that field-programmable gate arrays (FPGAs) may find themselves performing respectably compared with their hard-wired CPU kin. One reason is simple: As transistors shrink to atomic scales, the flaws that arise during fabrication are bound to become more common. With a CPU, such flaws can easily ruin the whole device. But with an FPGA, you can arrange the physical layout of your circuits to avoid small-scale imperfections.
Another reason to expect a bright future for FPGAs comes from considering the relative difficulty of configuring their circuits. That exercise invariably requires a hardware-description language such as Verilog, so it resembles software programming, although it’s typically more challenging. But the only real alternative, programming multicore CPUs to eke out better performance from the massive parallelism they offer, is already difficult, and it promises to get even harder as chips with more and more cores are released.
So in the future, programming multicore processors and configuring FPGAs could reach parity in terms of the effort required. Should that come to pass, many more gadgets will surely be built with FPGAs. And when open-hardware companies switch to using FPGAs instead of CPUs, they will (by the very definition of “open”) share their hardware-description-language files, too. Others will then be free to reconfigure the circuitry, down to individual gates inside the FPGA. So the open-hardware movement could penetrate microelectronic design to a very deep level.
Another welcome change I see coming is a rise in repair culture as technology becomes less disposable and more permanent. Replacing worn-out computer parts five years from their purchase date won’t seem so silly when the replacement part has virtually the same specifications and price as the old one. This change in the keep-or-throw-away calculus will create a demand for schematics and spare parts, which in turn will facilitate the growth of open-hardware ecosystems and businesses.
The deceleration of Moore’s Law is already showing its effect in markets that are not particularly sensitive to performance. Consider, for example, the Arduino microcontroller platform. [See “The Making of Arduino,” Spectrum, November 2011.]
The performance demands of Arduino users (for education, physical computing, and simple embedded-control applications) have not grown appreciably, and thus the platform can be very stable. Indeed, many Arduino boards have used virtually the same hardware since 2005. This stability in turn has enabled the Arduino to grow deep roots in a thriving user base, one that enjoys open standards for hardware add-ons.
Another example is the shanzhai phenomenon in China. The shanzhai, often dismissed as “pirates,” are typically small businesses that rely on blueprints shared within their community to build low-end mobile phones. The market for these phones is largely insensitive to absolute levels of performance and thus to improvements in CPU technology. So the shanzhai get to use essentially the same core chipset for many years without compromising the competitiveness of their final products. This stability in turn affords these small, agile innovators time to learn the platform thoroughly and to produce riff after riff on the same theme. You may fault them for flouting intellectual-property laws, but you have to admit that they often achieve astonishingly creative results on a shoestring budget.
The scene is set, I believe, for many other kinds of open-hardware ecosystems to blossom. The inevitable slowdown of Moore’s Law may spell trouble for today’s technology giants, but it also creates an opportunity for the fledgling open-hardware movement to grow into something that potentially could be very big.
Personally, I’m looking forward to the changes—including the return of artisan engineering, where elegance, optimization, and balance are valued over raw speed and feature creep. Even if Moore’s Law ends soon or abruptly, electrical engineers and consumers alike should learn to stop worrying and prepare to love what’s in store.
Material in this article originally appeared in the author’s blog post “Why the Best Days of Open Hardware Are Yet to Come.”
This article originally appeared in print as “Moore’s Law is Dying (and That Could Be Good).”