
Fundamental Energy Transitions Can Take a Century

Electricity’s benefits were obvious, but it still took a lifetime to dominate

3 min read
A photo of Thomas Alva Edison standing in front of a dynamo.
Thomas Alva Edison poses with his dynamo, which he used to generate electricity for lighting.
Oxford Science Archive/Print Collector/Getty Images

One hundred and forty years ago, Thomas Edison began generating electricity at two small coal-fired stations, one in London (Holborn Viaduct), the other in New York City (Pearl Street Station). Yet although electricity was clearly the next big thing, it took more than a lifetime to reach most people. Even now, not all parts of the world have easy access to it. Count this slow rollout as one more reminder that fundamental systemic transitions are protracted affairs.

Such transitions tend to follow an S-shaped curve: Growth rates shift from slow to fast, then back to slow again. I will demonstrate this by looking at a few key developments in electricity generation and residential consumption in the United States, which has reliable statistics for all but the earliest two decades of the electric period.


In 1902, the United States generated just 6 terawatt-hours of electricity, and the century-plus-old trajectory shows a clear S-curve. By 1912, the output was 25 TWh, by 1930 it was 114 TWh, by 1940 it was 180 TWh, and then three successive decadal doublings lifted it to almost 1,600 TWh by 1970. During the go-go years, the 1930s was the only decade in which gross electricity generation did not double, but after 1970 it took two decades to double, and from 1990 to 2020, the generation rose by only one-third.
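The slow-fast-slow pattern described above is the classic logistic curve. As a minimal sketch, the snippet below compares the article's quoted generation figures against a logistic function whose ceiling is the roughly 4,000 TWh/yr plateau the article cites; the midpoint and rate parameters are illustrative guesses, not fitted values.

```python
import math

# U.S. gross generation figures quoted in the article (TWh; 1970 is "almost 1,600")
observed = {1902: 6, 1912: 25, 1930: 114, 1940: 180, 1970: 1600}

def logistic(year, ceiling=4000.0, midpoint=1975.0, rate=0.073):
    """S-curve: slow start, rapid middle, saturation near the ceiling.

    ceiling  -- the ~4,000 TWh/yr plateau the article cites
    midpoint -- year at which output is half the ceiling (illustrative guess)
    rate     -- growth constant; 0.073/yr roughly matches one doubling
                per decade during the rapid middle phase (ln 2 / 10 ~= 0.069)
    """
    return ceiling / (1.0 + math.exp(-rate * (year - midpoint)))

for year, twh in observed.items():
    print(f"{year}: observed {twh:>5} TWh, logistic sketch {logistic(year):7.0f} TWh")
```

The sketch overshoots the earliest decades and tracks the later ones more closely; real adoption data rarely fits a single clean logistic, which is part of why such transitions are hard to forecast from their early phase.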

As the process began to mature, the rising consumption of electricity was at first driven by declining prices, and then by the increasing variety of uses for electricity. The impressive drop in inflation-adjusted prices of electricity ended by 1970, and electricity generation reached a plateau, at about 4,000 TWh per year, in 2007.

The early expansion of generation was destined for industry—above all for the conversion from steam engines to electric motors—and for commerce. Household electricity use remained restrained until after World War II.


In 1900, fewer than 5 percent of all households had access to electricity; the biggest electrification jump took place during the 1920s, when the share of dwellings with connections rose from about 35 percent to 68 percent. By 1956, the diffusion was virtually complete, at 98.8 percent.

But access did not correlate strongly with use: Residential consumption remained modest, accounting for less than 10 percent of the total generation in 1930, and about 13 percent on the eve of World War II. In the 1880s, Edison light bulbs (inefficient and with low luminosity) were the first widely used indoor electricity converters. Lighting remained the dominant use for electricity in the household for the next three decades.

It took a long time for new appliances to make a difference, because there were significant gaps between the patenting and introduction of new appliances—including the electric iron (1903), the vacuum cleaner (1907), the toaster (1909), the electric stove (1912), the refrigerator (1913)—and their widespread ownership. Radio was adopted the fastest of all: 75 percent of households had it by 1937.

The same dominant share was reached by refrigerators and stoves only in the 1940s—dishwashers by 1975, color TVs by 1977, and microwave ovens by 1988. Again, as expected, these diffusions followed more or less orderly S-curves.

Rising ownership of these and a range of other heavy electricity users drove the share of residential consumption to 25 percent by the late 1960s, and to about 40 percent in 2020. This share is well above Germany’s 26 percent and far above China’s roughly 15 percent. A new market for electricity is opening up, but slowly: So far, Americans have been reluctant buyers of electric vehicles, and, notoriously, they have long spurned building a network of high-speed electric trains, which every other affluent country has done.

This article appears in the June 2022 print issue as “Electricity’s Slow Rollout.”

The Conversation (2)
Robert Moskowitz, 26 May 2022

You are conflating, at least, infrastructure, tool invention, and consumer TCO valuation. Home computing started in the late '70s (argue which device was the start). BBS use (X.25 dial-in) came for the first users in the mid-'80s, but the Internet, by the late '90s, changed the total picture. Yes, there are still home-computer deserts, due to cost, but this is not a century rollout. The current changes in electrification will, more likely than not, be different from how we first started on this path.

James Weller, 27 May 2022

It seems to me that all technology advancement cycles are accelerating. The rate of acceleration will depend mostly on virtuous cycles of cost and scaling. If we arrive at "cheap" fusion power that is truly far superior to previous energy sources, I think the transition would be rather rapid. However, I do agree that if the capital investment required is extremely large (and it most likely will be), it could well take decades to become a major player in the energy mix. Fusion would also probably have to undergo a decade or two of improvements before it starts to peak in effectiveness. The transition to solar is capped by its own constraints and costs: in particular, utility-scale energy storage is still rather expensive, and that limits alternative energy's role in the energy mix.

The First Million-Transistor Chip: the Engineers’ Story

Intel’s i860 RISC chip was a graphics powerhouse

21 min read
Twenty people crowd into a cubicle, the man in the center seated holding a silicon wafer full of chips

Intel's million-transistor chip development team

In San Francisco on Feb. 27, 1989, Intel Corp., Santa Clara, Calif., startled the world of high technology by presenting the first ever 1-million-transistor microprocessor, which was also the company’s first such chip to use a reduced instruction set.

The number of transistors alone marks a huge leap upward: Intel’s previous microprocessor, the 80386, has only 275,000 of them. But this long-deferred move into the booming market in reduced-instruction-set computing (RISC) was more of a shock, in part because it broke with Intel’s tradition of compatibility with earlier processors—and not least because after three well-guarded years in development the chip came as a complete surprise. Now designated the i860, it entered development in 1986 about the same time as the 80486, the yet-to-be-introduced successor to Intel’s highly regarded 80286 and 80386. The two chips have about the same area and use the same 1-micrometer CMOS technology then under development at the company’s systems production and manufacturing plant in Hillsboro, Ore. But with the i860, then code-named the N10, the company planned a revolution.
