How to Design a New Chip on a Budget

Hardware guru bunnie Huang talks about the open-source tools he uses to design circuits, and why he wants to build his own ASIC

Photo of bunnie Huang in front of computers. Photo: bunnie Huang

We recently had an interesting exchange with bunnie Huang, hardware guru and creator of Chumby, NeTV, and the Novena laptop, among other things. He’s also the author of Hacking the Xbox, The Essential Guide to Electronics in Shenzhen, and not one but two feature articles in IEEE Spectrum.

We were interested in Huang’s views about whether a small, modestly funded team—say a college-dorm startup—could produce a custom chip, just the way such groups now create board-level products and software with ease.

Software ventures in particular benefit from the vast amount of open-source code that is available for use in building commercial products. (One study found that the average commercial application contains 35 percent open-source code.) We wanted to get a sense of whether chip designers also enjoyed a rich ecosystem of open-source building blocks. Or is chip design still so closed and so challenging that it’s really just for large, established companies?

IEEE Spectrum: Why would a small startup want to produce its own application-specific integrated circuit (ASIC) in the first place? Couldn’t it just use a field-programmable gate array (FPGA) for whatever product it was hatching?

Huang: FPGAs generally come in big clunky packages and consume way too much power.

ASICs are absolutely necessary for making things like hearing aids, implantable or edible medical devices, GPS trackers to be carried by animals, mobile radios, RFID devices, electronic greeting cards, or other single-purpose, disposable circuits.

Another example is the driver IC inside the WS2812: by embedding a tiny ASIC alongside the LEDs, it made possible a single-package RGB LED with a serial protocol built in, and that has revolutionized lighting. (I actually got to meet the guy who designed the first commercially viable variant of this.)
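
To make that concrete, here is a rough sketch (in Python, purely for illustration) of the single-wire serial format the embedded driver ASIC decodes: 24 bits per LED in green-red-blue order, each bit encoded as a high pulse whose width distinguishes a 0 from a 1. The timing constants are approximate datasheet values, and the encoder is only a model; a real controller would bit-bang these pulses or generate them with SPI or PWM hardware.

```python
# Illustrative model of the WS2812's single-wire serial format: each LED
# consumes 24 bits (G, R, B order, most-significant bit first), and each bit
# is a high pulse whose width distinguishes 0 from 1. Timings are the
# approximate datasheet values.

T0H, T0L = 0.35e-6, 0.80e-6   # a "0" bit: ~0.35 us high, ~0.8 us low
T1H, T1L = 0.70e-6, 0.60e-6   # a "1" bit: ~0.7 us high, ~0.6 us low

def ws2812_pulses(colors):
    """Turn a list of (r, g, b) tuples into a list of (level, seconds) pulses."""
    pulses = []
    for r, g, b in colors:
        for byte in (g, r, b):                 # WS2812 expects GRB order
            for i in range(7, -1, -1):         # most-significant bit first
                bit = (byte >> i) & 1
                high, low = (T1H, T1L) if bit else (T0H, T0L)
                pulses.append((1, high))
                pulses.append((0, low))
    pulses.append((0, 60e-6))                  # >50 us low latches the data
    return pulses

# Example: two LEDs, red then dim blue. Each chip consumes its 24 bits and
# forwards the rest downstream, which is why one pin can drive a long chain.
stream = ws2812_pulses([(255, 0, 0), (0, 0, 64)])
print(len(stream), "pulses,", sum(t for _, t in stream) * 1e6, "microseconds total")
```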

So there’s definitely a broad range of really useful, industry-changing products that FPGAs just can’t touch—primarily what you might call “cheap, low-power stuff.”

How would you decide when to use an FPGA and when to create an ASIC? That depends. FPGAs waste a large amount of silicon compared with an ASIC, so the cost floor, which depends in large part on the surface area of silicon required for the chip, is often an order of magnitude higher than you’d want it to be. But fabricating an ASIC isn’t cheap either.
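
That order-of-magnitude claim comes down to simple area arithmetic. Every number in the sketch below (wafer cost, die areas, yield) is an illustrative assumption rather than real vendor pricing; the point is only that per-chip cost scales with die area, so a design that needs roughly 15 times more silicon on an FPGA ends up costing roughly 15 times more in silicon.

```python
# Back-of-the-envelope sketch of why die area sets the cost floor.
# Every number here is an illustrative assumption, not vendor pricing.

import math

wafer_cost = 1500.0     # assumed cost of one processed 200-mm wafer, in dollars
wafer_diameter = 200.0  # mm
usable_fraction = 0.9   # edge loss, scribe lanes, etc. (assumption)
yield_fraction = 0.85   # fraction of dies that work (assumption)

def cost_per_die(die_area_mm2):
    wafer_area = math.pi * (wafer_diameter / 2) ** 2
    dies_per_wafer = (wafer_area * usable_fraction) / die_area_mm2
    return wafer_cost / (dies_per_wafer * yield_fraction)

asic_area = 4.0   # a small, dedicated ASIC (mm^2, assumption)
fpga_area = 60.0  # an FPGA doing the same job in LUTs and routing (assumption)

print(f"ASIC silicon cost: ~${cost_per_die(asic_area):.2f}")
print(f"FPGA silicon cost: ~${cost_per_die(fpga_area):.2f}")
# With these numbers the FPGA's silicon alone costs about 15x more, before
# packaging, test, or the FPGA vendor's margin are even counted.
```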

I’m actually in the middle of this trap right now: I’m trying to build the next generation of NeTV, which is an FPGA-based video-processing engine. The ASICs that can accomplish such video processing cost less than half as much as an FPGA and can do a better job of it (in that they can process 4K video, whereas my FPGA solution maxes out at 1080p). But existing ASICs don’t have all the functionality I need. Because of some other constraints, though, I simply can’t afford to create an ASIC of my own for this product.

Another category of things ASICs are valuable for lies at the opposite end of the spectrum: the really high-end stuff. Let me explain with a short anecdote.

A while back, I read the paper on Google’s Tensor Processing Unit (TPU), and I thought, “Damn, I want that.” So I started looking at FPGAs to see what it would take to build something of equivalent capability.

I discovered that FPGAs that could even begin to hold a candle to Google’s TPU cost many thousands of dollars each—plus they require uber-expensive software licenses. Some large companies (like Microsoft) were able to team up with FPGA manufacturers, and presumably Microsoft received a pretty hefty discount, so it could create some interesting hardware using FPGAs to compete with Google’s TPUs. But for most people and companies, an FPGA in that class runs something like US $17,000 for a single chip.

Spectrum: What’s the least someone could spend to create an ASIC from scratch? Assume the chip is very simple. I imagine a good fraction of the cost might be just for the software to design it, no? And you’d have to know what design rules to satisfy. Is such information openly available?

Huang: I did a little research on this once upon a time. There are some open-source tools that might be able to get you there. The “SCMOS” (scalable CMOS) design rules are the most workable. I think these are the design rules that Open-V was trying to use.
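
For readers who haven’t met them, SCMOS rules are “scalable” because they are written in terms of a unit length λ, conventionally half the minimum drawn feature size, so one layout can be retargeted across processes. The sketch below uses a few representative rule values purely for illustration; the authoritative numbers are in the MOSIS SCMOS design-rule documentation for the target process.

```python
# Sketch of how lambda-based (SCMOS-style) design rules scale across processes.
# The rule values below are representative examples only; the authoritative
# numbers come from the MOSIS SCMOS design-rule documentation.

RULES_IN_LAMBDA = {
    "min_poly_width": 2,    # transistor gate length, in lambda (representative)
    "min_metal1_width": 3,  # representative value
    "contact_size": 2,      # representative value
}

def rules_in_microns(feature_size_um):
    """Lambda is conventionally half the minimum drawn feature size."""
    lam = feature_size_um / 2
    return {name: value * lam for name, value in RULES_IN_LAMBDA.items()}

# The same layout, expressed in lambda, retargets to different nodes:
for node in (0.6, 0.35, 0.25):  # microns
    print(f"{node} um process:", rules_in_microns(node))
```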

As for design software, you can use an open-source tool chain based on Magic (along with Xcircuit, IRSIM, NetGen, Qrouter, and Qflow). Or, if you can afford it, you could use closed-source commercial products, like those from Cadence.
I’ve used both Magic and Cadence design flows. I personally enjoyed using Magic’s chip-layout editor more, but Cadence software is more ironclad, having been used to design so many chips. And Cadence’s tool for simulating the effects of parasitic resistance and capacitance has been well vetted.

That’s not to say that you’d need to use Cadence or the like. I think you could use Magic to build some ICs that have really wide tolerances—the sorts of ICs that you might find inside LED drivers and maybe even stuff like hearing aids. It might be a bit of a challenge to do RF design, because the open-source tools to simulate parasitic effects might not be up to the task. But there is a methodology to refine models that should allow you to develop a successful design in two or three chip runs.

All to say, down to around the 180-nanometer technology node, you could get away with using open-source tools. Smaller than that, though, requires doing really funny stuff with the mask imaging and using shapes that aren’t just simple polygons anymore. And the design kits from various vendors to accomplish that get more and more closed.

180 nm is pretty “big” by today’s standards. But if you really wanted to place some special combination of circuits on a single silicon die, you could do it this way. And it could lead to some novel products that would otherwise be impossible with discrete designs. Be aware, though, that wafer-level chip-scale packaging (WL-CSP) allows printed-circuit-board integrators to come pretty close to what you might achieve with your custom ASIC.

How much would that custom ASIC cost to make? Estimating the cost of mask and chip fabrication is difficult because price lists are confidential. But stories I’ve heard suggest that a simple ASIC (say one that is a few square millimeters in size, fabricated using the 250-nm technology node) might cost a few thousand bucks for a couple dozen samples.
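
That “few thousand bucks” figure is plausible if you assume a multi-project-wafer (shuttle) run, where many customers share one mask set. The numbers in the sketch below are illustrative assumptions, not any foundry’s actual price list, but they show how the arithmetic can land in that range.

```python
# Rough sketch of how a multi-project-wafer (shuttle) run gets a tiny ASIC
# prototyped for a few thousand dollars. All numbers are illustrative
# assumptions, not any foundry's actual price list.

price_per_mm2 = 600.0  # assumed shuttle price per mm^2 at a mature node (USD)
die_area_mm2 = 4.0     # "a few square millimeters"
min_area_mm2 = 4.0     # shuttles usually bill a minimum block size (assumption)
samples = 40           # assumed number of bare dies returned
packaging_each = 20.0  # assumed cost to package each sample

billed_area = max(die_area_mm2, min_area_mm2)
fab_cost = billed_area * price_per_mm2
total = fab_cost + samples * packaging_each

print(f"Fabrication: ${fab_cost:.0f}, packaging: ${samples * packaging_each:.0f}")
print(f"Total: ${total:.0f} for {samples} samples (~${total / samples:.0f} each)")
```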

That price is compelling enough that I once toyed with the idea of fabricating an 8- or 16-bit CPU that would be totally inspectable. It might appeal to those really security-conscious folks who want to be sure there’s nothing funny in the microprocessor they are using. I figured that having such a thing fabricated would set me back a few thousand dollars. That’s comparable to the cost of developing any product, really. And it’s certainly within the range of bootstrapping.

Spectrum: Okay, let’s say a small startup uses free design tools and finds enough money to make at least a few chip-production runs. Would the designers be able to draw on open-source circuit designs as building blocks, the way software developers routinely do?

Huang: At the moment, there is a pretty reasonable repository of free-and-open circuit blocks specified at the register-transfer level (RTL), which is what is normally used in the design of digital chips. This includes RISC-V microprocessor cores but also designs like the lm32, mor1kx, and so forth.

There’s also a fair number of Wishbone-compatible designs, including stuff like Ethernet bridges and UARTs. The OpenCores project has a pretty reasonable list of such blocks, some of which have even been incorporated into ASICs (most of them target FPGAs, though).
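
For readers who haven’t met it, Wishbone is a simple, openly specified on-chip bus: a master asserts CYC and STB along with an address and, for writes, data, and the slave answers with ACK. The Python below is a toy behavioral model, not RTL, and the register-bank slave is a made-up example; it is only meant to show the handshake that lets these blocks plug together.

```python
# Toy behavioral model of a Wishbone "classic" single read/write transaction,
# just to show what gluing "Wishbone-compatible" blocks together implies.
# This is a Python sketch, not RTL; a real core would describe this in
# Verilog or VHDL with clocked CYC/STB/ACK signaling.

class WishboneRegisterSlave:
    """A toy slave: a small bank of 32-bit registers."""
    def __init__(self, n_regs=8):
        self.regs = [0] * n_regs

    def respond(self, cyc, stb, we, adr, dat_w=0):
        """Return (ack, dat_r). Only responds while CYC and STB are asserted."""
        if not (cyc and stb):
            return (0, 0)
        if we:
            self.regs[adr] = dat_w & 0xFFFFFFFF
            return (1, 0)              # acknowledge the write
        return (1, self.regs[adr])     # acknowledge the read with data

def master_write(slave, adr, value):
    ack, _ = slave.respond(cyc=1, stb=1, we=1, adr=adr, dat_w=value)
    assert ack, "slave never acknowledged"

def master_read(slave, adr):
    ack, data = slave.respond(cyc=1, stb=1, we=0, adr=adr)
    assert ack, "slave never acknowledged"
    return data

# Hypothetical peripheral with a single data register at address 0.
peripheral = WishboneRegisterSlave()
master_write(peripheral, adr=0, value=0x68)
print(hex(master_read(peripheral, adr=0)))
```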

As for mixed-signal and analog, open designs are quite scarce. The exciting part about Open-V was their willingness to open and share analog and mixed-signal blocks. These designs can’t be applied to more advanced technology nodes of fabrication, but at least in the “SCMOS” range, there’s a reasonable chance the designs can be turned into working chips.

It may be a long time, though, until there’s a good set of analog and mixed-signal design blocks out there that have been tested in ASICs, unfortunately. Other things that may never be truly available through open designs include memory blocks like SRAM, DRAM, flash, and electrically programmable fuses. This is because these things require intimate process knowledge to execute—knowledge that the foundries that make chips may never release.

All to say: A small player could certainly design its own ASIC and have it fabricated with nothing more than some ingenuity and a few thousand dollars. But it wouldn’t be able to create a sophisticated design or use a state-of-the-art technology node.
