Bespoke Processors: A New Path to Cheap Chips

Engineers can cut size and power in half by stripping away unused logic gates from general-purpose microcontrollers

4 min read
Photo: iStockphoto

“Processors are overdesigned for most applications,” says Rakesh Kumar, an associate professor of electrical and computer engineering at the University of Illinois. It's a well-known and necessary truth: In order to have programmability and flexibility, there's simply going to be more stuff on a processor than any one application will use. That's especially true of the type of ultralow-power microcontrollers that drive the newest embedded computing platforms such as wearables and Internet of Things sensors. These are often running one fairly simple application and nothing else (not even an operating system), meaning that a large fraction of the circuits on a chip never, ever see a single bit of data.

Kumar, University of Minnesota assistant professor John Sartori (formerly a student of Kumar's), and their students decided to do something about all that waste. Their solution is a method that starts by looking at the design of a general-purpose microcontroller. They came up with a rapid way of identifying which individual logic gates are never engaged for the application it's going to run. They then strip away all those excess gates. The result is what Kumar calls a “bespoke processor.” It's a physically smaller, less-complex version of the original microcontroller, designed to perform only the application needed. Kumar and Sartori detailed the bespoke processor project in June at the 44th International Symposium on Computer Architecture, in Toronto.
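The core idea, identifying gates that an application never exercises, can be sketched as a gate-level activity analysis: simulate the netlist against the application's input traces, record which gates ever change their output, and mark the rest as removable. This is only a minimal illustration of the concept; the netlist format, gate evaluation, and trace handling here are assumptions for the sketch, not the authors' actual toolflow.

```python
def eval_gate(kind, inputs):
    """Evaluate a simple combinational gate (toy gate library)."""
    if kind == "AND":
        return all(inputs)
    if kind == "OR":
        return any(inputs)
    if kind == "NOT":
        return not inputs[0]
    raise ValueError(f"unknown gate kind: {kind}")

def find_cuttable_gates(netlist, traces):
    """netlist: {gate_name: (kind, [input_signal_names])}, assumed to be
    listed in topological order. traces: list of {primary_input: bool}
    stimulus vectors representing everything the application ever does.
    Returns gates whose output never toggles across all traces -- each can
    be replaced by a wire tied to its constant value."""
    seen_values = {g: set() for g in netlist}
    for vector in traces:
        values = dict(vector)
        for gate, (kind, ins) in netlist.items():
            values[gate] = eval_gate(kind, [values[i] for i in ins])
            seen_values[gate].add(values[gate])
    return {g for g, vals in seen_values.items() if len(vals) <= 1}

# Toy example: input b is always 0 in this application's traces, so the
# AND gate's output is stuck at 0 and the NOT gate's output is stuck at 1.
netlist = {"g1": ("AND", ["a", "b"]), "g2": ("NOT", ["g1"])}
traces = [{"a": True, "b": False}, {"a": False, "b": False}]
print(sorted(find_cuttable_gates(netlist, traces)))  # → ['g1', 'g2']
```

A real flow would run gate-level simulation on the synthesized microcontroller netlist across the application's full input space, then hand the constant-output gate list to the physical-design tools for removal.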

3 Ways 3D Chip Tech Is Upending Computing

AMD, Graphcore, and Intel show why the industry’s leading edge is going vertical

8 min read
A stack of three images: a chip, a group of chips, and a single grey chip. Intel; Graphcore; AMD

A crop of high-performance processors is showing that the new direction for continuing Moore’s Law is all about up. Each generation of processor needs to perform better than the last, and, at its most basic, that means integrating more logic onto the silicon. But there are two problems: One is that our ability to shrink transistors and the logic and memory blocks they make up is slowing down. The other is that chips have reached their size limits. Photolithography tools can pattern only an area of about 850 square millimeters, which is about the size of a top-of-the-line Nvidia GPU.

For a few years now, developers of systems-on-chips have been breaking up their ever-larger designs into smaller chiplets and linking them together inside the same package to effectively increase the silicon area, among other advantages. In CPUs, these links have mostly been so-called 2.5D, where the chiplets are set beside each other and connected using short, dense interconnects. Momentum for this type of integration will likely only grow now that most of the major manufacturers have agreed on a 2.5D chiplet-to-chiplet communications standard, the Universal Chiplet Interconnect Express (UCIe).
