PlayStation 3 Processor Speeds Financial-Risk Calculation

Georgia Tech's parallel processing code for PlayStation 3's Cell processor makes mountains of random numbers to improve Value-at-Risk and other important algorithms

3 min read

19 November 2008—Among the many things that have gone wrong on Wall Street this past year is the use and misuse of computational algorithms meant to give financiers a clear picture of the risk of big losses. One important calculation, called Value-at-Risk (VaR), assesses the probability that an investment portfolio will lose more than a specified amount over a given period of time. Though VaR has been much maligned, experts say firms have little choice but to continue, if not accelerate, their use of such algorithms as the need to calculate risk and value becomes more acute. The surviving financial firms might get a little help from code written for Cell, the processor behind Sony’s PlayStation 3 as well as a number of high-end IBM servers and supercomputers.
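As a rough illustration, not drawn from the article itself, the textbook definition of VaR can be written in one line, where L is the portfolio loss over the chosen horizon and α is the confidence level (both symbols are notation introduced here for the example):

% Illustrative textbook definition of Value-at-Risk; L (portfolio loss) and
% alpha (confidence level) are notation assumed for this example.
\mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \le 1 - \alpha \,\}

Read this way, a one-day 99 percent VaR of $10 million says the model expects losses exceeding $10 million on no more than about 1 trading day in 100.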

By taking advantage of Cell’s unique architecture, computer scientists at Georgia Tech, in Atlanta, say they have found a way to accelerate the generation of random numbers up to 33-fold compared with what’s possible using commercially available Intel or AMD processors. Random-number generation is at the heart of so-called Monte Carlo simulations, and as a proof of concept, the researchers built their random-number generator into a program that efficiently runs a VaR algorithm. David A. Bader, executive director of high-performance computing at Georgia Tech, says his group has been working with several financial firms, whose names he would not disclose because of legal agreements, on their use of the VaR software. Bader says the source code behind the random-number generators for Cell will be made available during the SC08 supercomputing conference this week in Austin, Texas.
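To make the mechanics concrete, here is a minimal, single-threaded Monte Carlo VaR sketch in C. It is illustrative only: the portfolio size, return parameters, confidence level, and the std_normal helper are invented for the example, it relies on the C library's basic rand(), and it is not the Georgia Tech implementation, whose contribution is precisely to speed up the random-number generation step on the Cell's specialized vector cores.

/* Minimal single-threaded Monte Carlo VaR sketch (illustrative only).
 * All parameters below are invented for the example; production code
 * would use a far higher-quality random-number generator, which is the
 * piece the Georgia Tech work accelerates on the Cell processor. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* One standard normal draw via the Box-Muller transform. */
static double std_normal(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

static int cmp_double(const void *a, const void *b)
{
    const double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    const int    n_sims     = 100000;  /* Monte Carlo scenarios           */
    const double portfolio  = 1.0e6;   /* hypothetical US $1 million book */
    const double mu         = 0.0005;  /* assumed mean daily return       */
    const double sigma      = 0.02;    /* assumed daily return volatility */
    const double confidence = 0.99;    /* one-day 99 percent VaR          */

    double *loss = malloc(n_sims * sizeof *loss);
    if (loss == NULL)
        return 1;

    srand(42);
    for (int i = 0; i < n_sims; i++) {
        double r = mu + sigma * std_normal(); /* simulated daily return */
        loss[i] = -portfolio * r;             /* positive number = loss */
    }

    /* VaR is the loss at the chosen quantile of the simulated losses. */
    qsort(loss, n_sims, sizeof *loss, cmp_double);
    double var = loss[(int)(confidence * (n_sims - 1))];
    printf("Simulated one-day %.0f%% VaR: $%.0f\n", confidence * 100.0, var);

    free(loss);
    return 0;
}

A Cell-tuned version would differ mainly in that inner loop, producing its random draws in vectorized batches spread across the chip's cores; that generation step is the part the Georgia Tech code accelerates.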

3D-Stacked CMOS Takes Moore’s Law to New Heights

When transistors can’t get any smaller, the only direction is up

10 min read
Illustration: Emily Cooper

Perhaps the most far-reaching technological achievement of the last 50 years has been the steady march toward ever-smaller transistors, packed ever more tightly together and consuming ever less power. And ever since the two of us started our careers at Intel more than 20 years ago, we’ve been hearing alarms that the descent into the infinitesimal was about to end. Yet year after year, brilliant new innovations continue to propel the semiconductor industry further.

Along this journey, we engineers had to change the transistor’s architecture as we continued to scale down area and power consumption while boosting performance. The “planar” transistor designs that took us through the last half of the 20th century gave way to 3D fin-shaped devices by the first half of the 2010s. Now, these too have an end date in sight, with a new gate-all-around (GAA) structure rolling into production soon. But we have to look even further ahead because our ability to scale down even this new transistor architecture, which we call RibbonFET, has its limits.
