The history of technology is full of breakthroughs in one field that wound up working wonders in a related one. The 300B vacuum tube, introduced by Western Electric in 1937 to amplify telephone signals, found a far more enduring use as a high-fidelity audio amplifier. The atomic clocks first used in the 1960s by the U.S. military to track Sputnik and later to validate Albert Einstein's relativity theories are now the basis of the Global Positioning System. And of course, the magnetron, invented in the 1920s at General Electric and used in radar during World War II, was later repurposed as the heart of the microwave oven.
Now add another tech crossover: The graphics coprocessor, invented in the 1970s to churn through voluminous and repetitive calculations and render smooth and realistic-looking images on computer screens, can now chew on large-scale databases.
Database processing is a cornerstone of computing, and it is a market that last year generated approximately US $27 billion, according to technology analysis firm Forrester Research, in Cambridge, Mass. The firm projects that this number—which includes new database licenses, technical support, and consulting—will grow to $32 billion by 2013. Every time you bid on an eBay auction, search for a movie on Netflix, look for a Kindle title on Amazon, or do a Google search, massive database applications spring into action, delving into huge quantities of data spread across tens of thousands of machines.
This radical new task for graphics chips evolved from their role as the engine of computer games. So what does sifting enterprise-class databases have in common with rendering virtual monsters in a game? Both require handling huge amounts of data: Rendering realistic-looking virtual monsters means generating millions of pixels every second, while searching a large database means accessing millions of records per second. In both cases, the same simple operation is applied independently to millions of data items, which is exactly the kind of work a GPU is built to do in parallel. So why not take the hardware that accelerates virtual monsters and put it to work on real-world applications, like the databases that figure in our daily lives far more than pixel monsters do?
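To make the parallel concrete, here is a minimal sketch of that pattern in CUDA C, Nvidia's extension of C for programming its GPUs. Just as a pixel shader runs the same small program on every pixel, the kernel below runs the same comparison on every record of a table. The one-column table, the filter_records kernel, and the fixed threshold are illustrative assumptions for this sketch, not the design of any real database engine.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Hypothetical record layout: one 32-bit key per row of a table.
// Each GPU thread inspects exactly one record, just as a pixel shader
// touches exactly one pixel.
__global__ void filter_records(const int *keys, int *matches, int n, int threshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        matches[i] = (keys[i] > threshold) ? 1 : 0;  // evaluate the predicate
}

int main()
{
    const int n = 1 << 20;        // one million toy records
    const int threshold = 500;

    std::vector<int> h_keys(n), h_matches(n);
    for (int i = 0; i < n; ++i) h_keys[i] = i % 1000;   // made-up data

    int *d_keys, *d_matches;
    cudaMalloc((void **)&d_keys, n * sizeof(int));
    cudaMalloc((void **)&d_matches, n * sizeof(int));
    cudaMemcpy(d_keys, h_keys.data(), n * sizeof(int), cudaMemcpyHostToDevice);

    // One thread per record: millions of comparisons proceed in parallel.
    int block = 256;
    int grid = (n + block - 1) / block;
    filter_records<<<grid, block>>>(d_keys, d_matches, n, threshold);

    cudaMemcpy(h_matches.data(), d_matches, n * sizeof(int), cudaMemcpyDeviceToHost);

    long hits = 0;
    for (int i = 0; i < n; ++i) hits += h_matches[i];
    printf("%ld of %d records matched\n", hits, n);

    cudaFree(d_keys);
    cudaFree(d_matches);
    return 0;
}
```

Each thread does a trivial amount of work; the speed comes from running many thousands of such threads at once, which is precisely how a GPU fills a screen with pixels.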
For the past few years, first at the University of California, Santa Cruz, and now at Oracle, we have been looking for ways to leverage the power of these graphics processors, known as graphics processing units (GPUs). These special-purpose chips are designed to be paired with a central processing unit (CPU) for applications like games and scientific visualization, which demand high graphics performance. GPUs are the progeny of the old video cards, which did nothing but display memory contents on the screen. They ease the computational burden on the CPU by handling the calculations and other simple, highly repetitive operations necessary for rendering the lines, polygons, and surfaces of a full-motion graphics scene. For the price of a low-end computer, a high-end graphics card condenses into a single PC card the processing power that just 10 years ago required a supercomputer.
A GPU can deliver hundreds of billions of operations per second, and some exceed a teraflop, or a trillion floating-point operations per second, while requiring only slightly more electrical power and cooling than a CPU. For the same power and cooling budget, a GPU can deliver 20 to 30 times as much total computational power as a CPU. That works out to far less power per calculation.
Take the Nvidia GeForce GTX 285 graphics card, for example. For the price of a quad-core CPU ($300) and 1.5 times the power consumption (200 watts), you get a processor that can rip through 1 trillion mathematical operations per second, about 20 times as fast as a CPU. And because you can plug up to four GPUs into a single server, the GPUs could be used to retrofit existing systems. Four GPUs in place of a single CPU would mean an 80-fold increase in performance for just six times the power consumption. So, with fewer machines, you get less heat and a lower air-conditioning bill.
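Spelled out, the arithmetic behind those figures uses only the numbers above: each GPU is about 20 times as fast as the quad-core CPU and draws about 1.5 times its power (200 watts versus, by that ratio, roughly 133 watts), so stacking four of them gives

\[
4 \text{ GPUs} \times 20\times \text{ each} \approx 80\times \text{ the performance of one CPU},
\qquad
4 \times 1.5\times = 6\times \text{ the power}.
\]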