Attempts to calculate the weather numerically have a long history. The first effort along these lines took place not in some cutting-edge university or government lab but on what the lone man doing it described as "a heap of hay in a cold rest billet." Lewis Fry Richardson, serving as an ambulance driver during World War I and working with little more than a table of logarithms, made a heroic effort to calculate weather changes across central Europe from first principles way back in 1917. The day he chose to simulate had no particular significance—other than that a crude set of weather-balloon measurements was available to use as a starting point for his many hand calculations. It's no surprise that the results didn't at all match reality.
Three decades (and one world war) later, mathematician John von Neumann, a computer pioneer, returned to the problem of calculating the weather, this time with electronic assistance, although the limitations of the late-1940s computer he was using very much restricted his attempt to simulate nature. The phenomenal advances in computing power since von Neumann's time have, however, improved the accuracy of numerical weather forecasting and allowed it to become a routine part of daily life. Will it rain this afternoon? Ask the weatherman, who in turn will consult a computer calculation.
Like weekly weather forecasting, climate simulation has benefited greatly from the steady advance of computational power. Nonetheless, there's still a long way to go. In particular, predicting the influence of clouds remains a weak link in the chain of reasoning used to make projections about changes in Earth's climate. Part of the reason is that the resolution of the global climate models in use today is too coarse to simulate individual cloud systems. To gauge their effect, today's models must rely on statistical approximations; some climatologists would be much happier if they could model cloud systems directly. The problem is that the computing oomph for that isn't available today, and it probably won't be anytime soon.
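To get a sense of the gap, consider a rough back-of-envelope calculation, sketched below in Python. The numbers are illustrative assumptions rather than figures from any particular model: a horizontal grid spacing of roughly 100 kilometers for a conventional global climate model, about 2 kilometers to begin resolving individual cloud systems, and a cost that grows with the cube of the refinement factor because the time step must shrink along with the grid spacing.

# Back-of-envelope sketch: how the cost of a global atmospheric model grows
# when the horizontal grid is refined enough to resolve cloud systems.
# The resolutions and the cubic scaling rule are illustrative assumptions.

EARTH_SURFACE_KM2 = 510e6  # approximate surface area of Earth

def relative_cost(coarse_km: float, fine_km: float) -> float:
    """Cost ratio for refining the horizontal grid from coarse_km to fine_km.

    Assumes cost scales as (number of grid columns) x (number of time steps),
    with the time step shrinking in proportion to the grid spacing, so the
    total grows roughly as the cube of the refinement factor.
    """
    refinement = coarse_km / fine_km
    return refinement ** 3

columns_coarse = EARTH_SURFACE_KM2 / (100 * 100)  # ~100-km grid cells
columns_fine = EARTH_SURFACE_KM2 / (2 * 2)        # ~2-km, cloud-resolving cells

print(f"grid columns at 100 km: {columns_coarse:,.0f}")
print(f"grid columns at   2 km: {columns_fine:,.0f}")
print(f"rough cost increase:    {relative_cost(100, 2):,.0f}x")

Under these assumed numbers, the computational burden grows by a factor of more than a hundred thousand, which is why the shortfall cannot simply be waited out.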
Microprocessor clock speeds are no longer increasing with each new generation of chip fabrication. So to obtain more computational horsepower, the usual strategy is to gang together many processors, each working on a piece of the problem at hand. But that solution has drawbacks, not the least of which is that it multiplies electrical demands. Indeed, the cost of the power required to run such computer systems can exceed their capital costs. This is an industry-wide problem. Companies with large computing needs, such as Google, will build facilities near hydroelectric dams to get inexpensive electricity for their data centers, which can consume 40 megawatts or more.
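A similar quick calculation, again a sketch using hypothetical prices rather than figures from the article, shows what a 40-megawatt facility implies for the electricity bill and why siting near cheap hydroelectric power is so attractive.

# Rough arithmetic on the electricity bill of a 40-megawatt data center,
# the figure cited above. The two prices are hypothetical round numbers
# chosen only to illustrate the benefit of cheap hydroelectric power.

POWER_MW = 40
HOURS_PER_YEAR = 365 * 24

def annual_cost_dollars(price_per_kwh: float) -> float:
    """Yearly electricity cost at a flat price per kilowatt-hour."""
    return POWER_MW * 1000 * HOURS_PER_YEAR * price_per_kwh

for label, price in [("typical grid rate", 0.10), ("cheap hydro rate", 0.03)]:
    millions = annual_cost_dollars(price) / 1e6
    print(f"{label} at ${price:.2f}/kWh: about ${millions:,.1f} million per year")

At tens of millions of dollars a year, the electricity bill quickly rivals the price of the hardware itself, which is why power has become a first-order design constraint.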
This power crisis also means that high-performance computing for such things as climate modeling is not going to advance at anything like the pace it has over the last two decades unless fundamentally new ideas are applied. Here we describe one possible approach. Rather than constructing supercomputers as most are built now, from the kinds of microprocessors found in fast desktop computers or servers, we propose adopting designs and design principles drawn, oddly enough, from the portable-electronics marketplace. Only then will it be possible to reduce the power consumption and cost of a next-generation supercomputer to a manageable level.
Back in the 1970s and 1980s, the high-performance computing industry was focused on building the equivalent of Ferraris—high-end machines designed to drive circles around the kinds of computing hardware a normal person could buy. But by the late 1980s and early 1990s, research and development in the rapidly growing personal-computer industry dramatically improved the performance of standard microprocessors. The ensuing pace of advance was so quick that clusters of ordinary processors, the Fords and Volkswagens of the industry, all driving in parallel, soon proved as powerful as specially designed supercomputers—and at a fraction of the cost.