From 2,300 transistors on an Intel 4004 chip to the forest of 2 billion transistors residing on the latest generation of Intel microprocessors, Gordon E. Moore’s famous law has guided the steady shrinking of transistors and their consequent density on microchips. That doubling about every two years is how we went from Pong in 1972 to the astonishing real-time rendering of hair that moves according to the laws of real-world physics in the video game Heavenly Sword in 2007.
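The doubling rule is easy to check against those two endpoints. The sketch below (an illustration of the compounding arithmetic, not Intel's actual roadmap) projects the 4004's 1971 transistor count forward by doubling every two years:

```python
# Moore's law as simple compounding: transistor count doubles every two years.
# The 2,300-transistor starting figure (Intel 4004, 1971) is from the article;
# the function itself is just the doubling rule, not real product data.

def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# From 2,300 transistors in 1971, doubling every two years, to 2008:
estimate = projected_transistors(2300, 1971, 2008)
print(f"{estimate:,.0f}")  # roughly 850 million: within a small factor of ~2 billion
```

Thirty-seven years of doubling every two years multiplies the count by about 370,000, landing within a small factor of the roughly 2 billion transistors cited above.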

Enabling such technological marvels has been the sharp growth of available processing power and the commensurate decline of the cost of DRAM over the past 37 years [see timeline]. Though Moore made his prediction back in 1965, it was the concurrent invention of the microprocessor and DRAM in 1971 that sparked the complementary arcs shown below. Because the semiconductor industry advances in lockstep, this curve, though based on Intel chips, is likely representative of the general trend.

Technological advances like personal computers, the Internet, and video games drove these complementary slopes of cost decline and processing-power growth. But just as in the chicken-and-the-egg scenario, those slopes were also the driving force behind subsequent technological advances. Some might balk at the US $399 price tag of an iPhone, but in 1971, its then strictly hypothetical 128 MB of DRAM would have set the consumer back about $50,688 in 2008 dollars. In recent years, processing power has hit a plateau: to continue ramping up performance according to the expectations set by Moore’s Law, companies like Intel and AMD have turned to multicore processing. The future path of this graph, therefore, is not necessarily predictable.
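The rate of that cost decline can be back-solved from the 1971 figure. In the sketch below (mine, not the article's), the $50,688 cost for 128 MB is from the text, but the assumed 2008 price of $4 for the same 128 MB is a hypothetical placeholder chosen only to make the arithmetic concrete:

```python
import math

# Back-solving the price-halving period implied by the DRAM cost decline.
# cost_1971 is the article's figure (128 MB of DRAM, in 2008 dollars);
# cost_2008 is an ASSUMED placeholder price for 128 MB, not from the article.

cost_1971 = 50_688.0   # dollars for 128 MB in 1971 (2008 dollars)
cost_2008 = 4.0        # assumed dollars for 128 MB in 2008 (hypothetical)
years = 2008 - 1971

halvings = math.log2(cost_1971 / cost_2008)   # how many times the price halved
halving_period = years / halvings             # years per halving
print(f"~{halvings:.1f} halvings, one every ~{halving_period:.1f} years")
```

Under that assumption the price halves roughly every 2.7 years, in the same ballpark as the two-year doubling cadence of Moore's law.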

Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

[Illustration: a plate of spaghetti made from code, by Shira Inbar]

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures would almost guarantee that their software would contain more bugs than it would otherwise.
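The "hidden flaws that come to light only under use" pattern can be made concrete with a toy sketch (mine, not the article's). A function that leans on shared mutable state works fine in isolation but leaks state once called repeatedly; its pure, functional-style counterpart cannot misbehave that way:

```python
# A classic hidden flaw: Python evaluates a mutable default argument once,
# so the same list is silently shared across calls. The functional habit of
# returning new values instead of mutating avoids the surprise entirely.
# (Illustrative toy code, not taken from the article.)

def append_log_buggy(entry, log=[]):      # the default list is shared across calls
    log.append(entry)
    return log

def append_log_pure(entry, log=()):       # pure: builds a new tuple, mutates nothing
    return (*log, entry)

append_log_buggy("a")
assert append_log_buggy("b") == ["a", "b"]   # surprise: "a" leaked from the first call

assert append_log_pure("a") == ("a",)
assert append_log_pure("b") == ("b",)        # no hidden state between calls
```

The buggy version passes any single-call test; only sustained use, exactly the maintenance-phase scenario described above, exposes it.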
