
The CPU’s Silent Partner: The Coprocessor’s Role Is Often Unappreciated

While coprocessors have taken many forms, the most important one today is the cloud


Illustration: Dan Page

One reason the PC has endured for nearly 40 years is that its design was almost entirely open: No patents restricted reproduction of the fully documented hardware and firmware. When you bought a PC, IBM gave you everything you needed to manufacture your own clone. That openness seeded an explosion of PC compatibles, the foundation of the computing environment we enjoy today.

In one corner of the original PC’s motherboard, alongside the underpowered-but-epochal 8088 CPU, sat an empty socket. It awaited an upgrade it rarely received: an 8087 floating-point coprocessor.

Among the most complex chips of its day, the 8087 accelerated mathematical computations—in particular, the calculation of transcendental functions—by two orders of magnitude. While not something you’d need for a Lotus 1-2-3 spreadsheet, for early users of AutoCAD those functions were absolutely essential. Pop that chip into your PC and rendering detailed computer-aided-design (CAD) drawings no longer felt excruciatingly slow. That speed boost didn’t come cheap, though. One vendor sold the upgrade for US $295—almost $800 in today’s dollars.

Recently, I purchased a PC whose CPU runs a million times as fast as that venerable 8088 and uses a million times as much RAM. That computer cost me about as much as an original PC—but in 2020 dollars, it’s worth only a third as much. Yet the proportion of my spend that went into a top-of-the-line graphics processing unit (GPU) was the same as what I would have invested in an 8087 back in the day.

Although I rarely use CAD, I do write math-intensive code for virtual or augmented reality and videogrammetry. My new coprocessor—a hefty slice of Nvidia silicon—performs its computations at 200 million times the speed of its ancestor.

That kind of performance bump can only partially be attributed to Moore’s Law. Half or more of the speedup derives from the massive parallelism designed into modern GPUs, which are capable of simultaneously executing several thousand pixel-shader programs (to compute position, color, and other attributes when rendering objects).
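The data-parallel style that pixel shaders use can be emulated in a few lines of NumPy: one array expression computes a value for every pixel at once, instead of looping pixel by pixel. This is only an illustrative sketch of the programming model, not GPU code; the "shader" here is an invented example that shades each pixel by its distance from the image center.

```python
import numpy as np

# Emulate the pixel-shader model: the same small program runs
# (conceptually in parallel) for every pixel of a 640x480 image.
height, width = 480, 640
ys, xs = np.mgrid[0:height, 0:width]   # per-pixel coordinates

# "Shader": brightness falls off with distance from the image center.
cx, cy = width / 2, height / 2
dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
brightness = 1.0 - dist / dist.max()   # 1.0 at center, 0.0 at the far corner

print(brightness.shape)                # one value per pixel: (480, 640)
```

A GPU executes the per-pixel program on thousands of hardware threads at once; NumPy merely expresses the same "one program, all pixels" idea on a CPU.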

Such massive parallelism has its direct analogue in another, simultaneous revolution: the advent of pervasive connectivity. Since the late 1990s, it’s been a mistake to conceive of a PC as a stand-alone device. Through the Web, each PC has been plugged into a coprocessor of a different sort: the millions of other PCs that are similarly connected.

The computing hardware we quickly grew to depend on was eventually refined into a smartphone, representing the essential parts of a PC, trimmed to accommodate a modest size and power budget. And smartphones are even better networked than early PCs were. So we shouldn’t think of the coprocessor in a smartphone as its GPU, which helps draw pretty pictures on the screen. The real coprocessor is the connected capacity of some 4 billion other smartphone-carrying people, each capable of sharing with and learning from one another through the Web or on various social-media platforms. It’s something that brings out both the best and worst in us.

We all now have powerful tools at our fingertips for connecting with others. When we plug ourselves into this global coprocessor, we can put our heads together to imagine, plan, and produce—or to conspire, harass, and thwart. We can search and destroy, or we can create and share. With the great power that this technology confers comes great responsibility. That’s something we should remind ourselves of every time we peer at our screens.

This article appears in the September 2020 print issue as “The CPU’s Silent Partner.”

The Conversation (1)

John Shepley — 17 Sep 2021

In the early 1990s I worked on a program to upgrade the digital signal processor on an airborne fire-control radar from the 1970s hardwired unit to a more modern programmable DSP. In addition to updating the hardware with modern ICs, we decided to add one of the latest graphics processors to the design. We used the Texas Instruments 34010 in the prototype, and by the time it went into production, the TI 34020 was available.

Because radars process large arrays of data that can resemble graphical displays, that powerful family of chips enabled enormous advances in performance that couldn't easily have been achieved otherwise. And because the instruction set was optimized to perform operations across arrays, the programming was simpler than it would have been with general-purpose processors. As a result, we were able to enhance the air-to-ground mapping resolution as well as the detection and tracking of air-to-air targets. As an added bonus, we created a display that showed a graphical representation of the fast Fourier transform (FFT) data right on the pilot's display, which could be captured by the video recorders in our testbed aircraft.
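[The array-wide processing the comment describes can be sketched in NumPy. This is a hypothetical illustration, not the actual radar code: radar returns are modeled as a 2-D array of range bins by pulses, and a single array-oriented FFT call transforms every range bin's pulse train at once, much as an instruction set optimized for array operations avoids per-element loops.]

```python
import numpy as np

# Hypothetical radar data: rows are range bins, columns are successive pulses.
rng = np.random.default_rng(0)
n_range_bins, n_pulses = 64, 128
returns = rng.standard_normal((n_range_bins, n_pulses))

# One array-oriented call computes the FFT along the pulse axis for
# every range bin simultaneously -- no explicit loop over bins.
doppler = np.fft.fft(returns, axis=1)
spectrum = np.abs(doppler)      # magnitude, as might be drawn on a display

print(spectrum.shape)           # (64, 128): a spectrum per range bin
```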

I left that industry in the mid '90s, and I can only imagine the benefits that more modern GPUs can bring.