I’ve been hearing about the impending end of Moore’s Law for so many years that I’ve become skeptical of all the claims of doom. Like the Little Engine That Could, Moore’s Law keeps chugging along. Nonetheless, it has definitely reached the huffing and puffing stage.
I was considering upgrading my desktop with a new CPU and motherboard, but new, comparably priced CPUs have about the same clock speed as my 4-year-old model. The newer ones do have more transistors and better architectures, so technical benchmarks show about a 50 percent improvement. Even so, when it comes to everyday applications, the newer models might not exhibit noticeably better performance. I’m disappointed because I feel compelled to have the latest stuff at all times.
While transistors are continuing to shrink, it’s at a slower pace. The technology road map calls for 5-nanometer fabrication by about 2020, but since we can’t run those transistors faster—mostly because of heat dissipation problems—we will need to find effective ways of using more transistors in lieu of increasing clock speed. And because of increasing fabrication costs, these designs will have to be produced at high volume.
No one knows what electronics will be like in the future. It’s hard to think beyond Moore’s Law. Since the time of the vacuum tube, there has been a century of exponential improvement. When I was a child, I thought that all future designs would simply be different arrangements of tubes, resistors, and capacitors. How little I knew! I’m sure that today’s budding engineers will feel the same way in the future.
Maybe they will be tinkering with carbon nanotubes, but whatever it is, the huffing and puffing will go on. The little engine will still be climbing the hill.
Meanwhile, I see electronics design as riding a series of waves. For maximum professional opportunity, we just need to find where the big waves are, go there, and enjoy the ride. Right now the biggest waves are to be found in the world of cellphone electronics. As cellphone technology matures and plateaus, we have an enormous reserve in all the meticulously designed, high-volume components that make up our smartphones. Since smartphones do just about everything, there is a bonanza in that junk pile of parts. (Consider that the current renaissance in virtual and augmented reality was built on the back of light, cheap displays created for smartphones.)
If a design can’t be based on repurposed cellphone tech, the next approach is to sidestep the fading improvements in general-purpose CPUs and adopt an application-specific architecture. But this is a high-stakes game, and it’s necessary to find a big wave to ride. Being out by yourself, waiting for a tiny wave, won’t do. At this point everyone is racing to join all the surfers at the machine-learning beach. Instead of CPUs, it’s all about TPUs—the tensor processing units that do the batched matrix multiplications that implement deep learning in neural-network architectures.
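To make the TPU point concrete, here is a minimal NumPy sketch (not actual TPU code) of the operation those chips accelerate: the batched matrix multiplication at the core of a neural-network layer. The layer sizes and names are illustrative assumptions, not anything from the article.

```python
import numpy as np

# Illustrative sizes: a batch of 32 input vectors, a dense layer 128 -> 64.
rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 128, 64

x = rng.standard_normal((batch, d_in))   # batch of inputs
W = rng.standard_normal((d_in, d_out))   # layer weights
b = np.zeros(d_out)                      # layer bias

# Forward pass of one dense layer: a single batched matrix multiply,
# followed by a cheap elementwise nonlinearity (ReLU). Deep networks
# stack many such layers, so nearly all the arithmetic is matmuls --
# which is why an application-specific matrix engine pays off.
y = np.maximum(x @ W + b, 0.0)

print(y.shape)  # (32, 64)
```

A GPU or TPU runs this same computation, but with the multiply-accumulate hardware laid out specifically for large matrix products instead of general-purpose instructions.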
On the other hand, if the volume isn’t there for a particular application, you can hedge your bets with a field-programmable gate array. FPGAs enable incremental design improvements or adaptations to changes in requirements. Hardware is essentially morphed into software—or the other way around.
So hurry and get in while the water’s warm. Maybe there’s a big wave coming.
This article appears in the March 2018 print issue as “Riding the Wave of Electronics.”