About 15 months ago, the Gordon and Betty Moore Foundation awarded US $13.5 million to a five-year project involving an international collection of universities and national labs to start work on shrinking particle accelerators so that they could fit on a chip. The project, dubbed “Accelerator on a Chip,” could have a profound impact on both fundamental science research and medicine.
In a nutshell, the aim is to use lasers and a piece of nanostructured silicon or glass about the size of a grain of rice to accelerate electrons at a rate up to 30 times higher than typical values for conventional technology. The resulting technology could potentially match the power of SLAC's 3.2-kilometer-long linear accelerator in as little as 100 meters.
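The 100-meter figure follows directly from the gradient claim; here is a back-of-the-envelope check (the 30x ratio is the article's rough figure, not a measured value):

```python
# Back-of-the-envelope check of the length claim above.
# Assumes the rough 30x acceleration-gradient improvement quoted in the article.
slac_length_m = 3200     # SLAC's linear accelerator: ~3.2 km
gradient_ratio = 30      # claimed improvement over conventional technology

# Reaching the same final electron energy at 30x the gradient
# takes roughly 1/30 the length.
equivalent_length_m = slac_length_m / gradient_ratio
print(f"Equivalent length: ~{equivalent_length_m:.0f} m")
```

That works out to roughly 107 meters, consistent with the "as little as 100 meters" claim.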
In a visit to the offices of Joel England, a fellow at the Advanced Accelerator Research Department at SLAC and one of the leaders of the Accelerator-on-a-Chip project, we got some insights into the science and technology involved and how it is proceeding.
Ever since Jeremy Dahl of Stanford University first isolated the molecules known as “diamondoids” from crude oil in 2003, the material science community has been fascinated with their potential. Diamondoids, which are both the smallest and purest form of diamond, have a unique set of properties that have led scientists to consider their use in applications including quantum computation and enabling so-called diamondoid mechanosynthesis (DMS), in which diamondoid structures are built using a programmable molecular positioning approach.
While DMS may still be a long way off, researchers at Stanford and the U.S. Department of Energy’s SLAC National Accelerator Laboratory in Menlo Park are continuing to experiment with the material. They have developed more short-term applications for the molecule, while still keeping their eyes on the horizon for its long-term potential.
I’ll admit it: journalists like milestones. Nice round numbers and anniversaries make for good headlines. So my ears certainly perked up on Tuesday when Intel announced that it can now pack more than 100 million transistors into each square millimeter of chip “for the first time in our industry’s history,” as Kaizad Mistry, a vice president and co-director of logic technology at the company, put it. Delivering more transistors in the same area means the circuitry can be made smaller, saving on cost, or it means that more functionality can be added to a chip without having to make it bigger.
The nice round 100 million milestone (100.8 million, to be exact) belongs to Intel’s latest-and-greatest chip generation: 10 nanometers. For those uninitiated in semiconductor lingo, the 10 nm designation is a reference to the “node” or manufacturing technology used to make such chips. As a general rule, the smaller the number, the denser the circuitry. But even though node names look like measurements, today the numbers don’t really correspond to the size of any particular feature and there can be significant variation between companies.
When I wrote about Intel’s 10-nm plans in our January issue previewing the coming year in technology, the company was not yet ready to say much publicly about the specific dimensions of the transistors. This week, they were more forthcoming with figures [pdf]: it’s 34 nm from one fin to the next in the company’s FinFET transistors and 36 nm from one wire to the next in the most dense interconnect layers (down from 42 nm and 52 nm, respectively, in the previous, 14-nm chip generation).
Shorter distances such as these mean that 10-nm chips can pack significantly more transistors into a given area. The 100 million density figure comes from a metric that Intel senior fellow Mark Bohr has proposed the industry resurrect in order to better compare chipmakers' offerings. Instead of measuring a chip manufacturing generation by the area taken up by a certain component or set of components, Bohr proposes that we instead measure chip generations by their transistor density: in particular, by the number produced by an equation that combines the transistor density of a standard 2-input NAND cell and that of a scan flip-flop logic cell.
And by that metric, Bohr says, Intel has more than doubled its transistor density in recent years. From 22 nm to 14 nm, the transistor density jumped by a factor of 2.5. And in the move from 14-nm to 10-nm chip manufacturing technology, the jump was a factor of 2.7, from 37.5 million transistors per square millimeter to more than 100 million. Crucially, the company says, the 10-nm transistors have the capacity for higher speed and greater energy efficiency than their predecessors (although, when I spoke with Bohr late last year, he said the focus lately has been on the latter).
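Bohr's proposed metric can be sketched in a few lines of Python. The 0.6/0.4 weighting of NAND2 versus scan flip-flop density follows his published proposal; the cell transistor counts and areas below are illustrative placeholders, not Intel's actual 10-nm figures:

```python
# Bohr's proposed node metric: a weighted sum of the transistor densities
# of a standard 2-input NAND cell and a scan flip-flop cell.
# The 0.6/0.4 weights follow Bohr's proposal; the cell figures passed in
# below are made-up placeholders, not real process data.

def node_density_mtr_per_mm2(nand2_tr, nand2_area_um2, sff_tr, sff_area_um2):
    # Transistors per square micrometer is numerically equal to
    # millions of transistors per square millimeter (1 mm^2 = 1e6 um^2).
    return 0.6 * (nand2_tr / nand2_area_um2) + 0.4 * (sff_tr / sff_area_um2)

# Hypothetical cells: a 4-transistor NAND2 occupying 0.04 um^2,
# and a 24-transistor scan flip-flop occupying 0.25 um^2.
print(round(node_density_mtr_per_mm2(4, 0.04, 24, 0.25), 1))  # 98.4 MTr/mm^2

# The generation-to-generation jump quoted above:
print(round(100.8 / 37.5, 2))  # 2.69, i.e., the "2.7x" figure
```

Because the metric is computed from two fixed reference cells rather than a whole chip layout, it measures what the manufacturing node can deliver, independent of any one product's design choices.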
It remains to be seen whether the industry will agree that the new metric is a meaningful one. In comments to EE Times, one analyst said transistor count over a larger area, closer to the size of a real chip, would be a more relevant metric. And an unnamed spokesperson from rival chipmaker TSMC told the site: “I have no idea how Intel does its new calculation...for example, its [first-generation 14nm CPU] Broadwell used to have 18.4 million transistors per mm squared, yet under the new measure it suddenly has 37.5 million transistors per mm2. Are they trying to play paper games?”
Not so fast, says Intel. After this story originally posted, a company representative wrote to IEEE Spectrum to stress the difference between talking about a chip’s transistor density and this metric, which is designed to assess the capabilities of a manufacturing node. “Simply taking the total transistor count of a chip and dividing by its area is not meaningful because of the large number of design decisions that can affect it: factors such as cache sizes and performance targets can cause great variations in this value,” Bohr wrote in his proposal, titled “Let’s Clear Up the Node Naming Mess.”
Even if there is room to quibble over that specific 100 million figure, Intel is also saying that it is more than doubling transistor density with each new chip generation—and that this more aggressive level of miniaturization helps to counteract the slower cadence that has recently set in with respect to the introduction of each new generation. On balance, Intel said, the company is still on a pace that roughly corresponds to a doubling of transistor density every couple of years.
Intel calls the suite of strategies it uses to accomplish this more-than-doubling “hyperscaling.” It includes design improvements, but a big piece is the company’s approach to laying down the patterns that ultimately become the chip’s transistors and wiring, which Intel fellow Ruth Brain outlined in her talk [pdf].
With its 14-nm chips, Brain said, Intel began using a strategy called self-aligned double patterning (SADP). SADP is a form of multiple patterning, a range of strategies that can be used to make chip features much smaller than the 193-nm light that is used to print them by splitting the patterning process into multiple steps.
Other companies, Brain said, use a simple multiple patterning approach that essentially prints the same pattern multiple times, offset slightly. But that technique relies on a lithography machine’s ability to pinpoint the same spot for each exposure, and variability in this process can degrade chip performance and lower the number of usable chips produced. SADP splits up the patterning in a different way, to sidestep this “overlay” issue.
With 10-nm chips, Intel has adopted self-aligned quadruple patterning (SAQP), a similar approach that requires four passes through a lithography machine. Mistry says SAQP has one more generation in it, which would take Intel down to the feature sizes needed to produce the next generation: 7 nm.
Somewhere in there, we may just see extreme ultraviolet (EUV) lithography enter the picture. EUV uses 13.5-nm radiation (pretty much X-rays) instead of 193-nm ultraviolet light for feature patterning.
But back to the present and that 100 million transistors per square millimeter figure. It’s easy to underplay the engineering feats that go into making that sort of milestone (assuming it stands the test of time) possible. “You know one of the remarkable things about Moore’s Law is that Moore’s Law’s past seems preordained and ordinary, and Moore’s Law’s future is difficult and requires inventions,” Mistry told IEEE Spectrum.
Now, he says, the FinFET transistor seems par for the course, but it wasn’t when Intel introduced the technology in 2011. “All these things are difficult, but once they’re done they seem normal,” he adds. “And that’s the magic of Moore’s Law.”
This article was updated on 31 March to add a response from Intel to the comments from TSMC and to correct a caption.
In a meeting last week with Chong Liu, the postdoc in Yi Cui’s lab at Stanford who was the lead author of that research, we learned that water purification is just the start of the capabilities of this line of research, which has had a number of incarnations.
An international team of researchers has developed a low-power gas sensor chip that can operate at room temperature, making possible the development of personal air-quality monitoring devices that we could carry around with us.
In research described in the journal Science Advances, the team of researchers fabricated a chemical-sensitive field-effect transistor (CS-FET) platform based on 3.5-nanometer-thin silicon channel transistors. The platform, which is highly sensitive but consumes a small amount of power, can detect a wide range of different gases.
Ted Sargent and his team at the University of Toronto have done many things with quantum dots: boosted solar cell efficiency, invented infrared imagers, and created optoelectronics you can apply with a paintbrush. Now Sargent and his team have added a new spice to their recipe for colloidal quantum dots that promises to change the struggling prospects of quantum-dot-based lasers. If the new approach lives up to its promise, it could lead to brighter, less expensive, and tunable lasers for video projectors and medical imaging, among other applications.
A team of researchers based in Switzerland is on the way to laying bare much of the secret technology inside commercial processors. They pointed a beam of X-rays at a piece of an Intel processor and were able to reconstruct the chip’s warren of transistors and wiring in three dimensions. In the future, the team says, this imaging technique could be extended to create high-resolution, large-scale images of the interiors of chips.
The technique is a significant departure from the way the chip industry currently looks inside finished chips, in order to reverse engineer them or check that their own intellectual property hasn’t been misused. Today, reverse engineering outfits progressively remove layers of a processor and take electron microscope images of one small patch of the chip at a time.
But “all it takes is a few more years of this kind of work, and you’ll pop in your chip and out comes the schematic,” says Anthony Levi of the University of Southern California. “Total transparency in chip manufacturing is on the horizon. This is going to force a rethink of what computing is,” he says, and of what it means for a company to add value in the computing industry.
Research that started out with the humble aim of growing an atomically flat, single-crystalline gold surface ultimately morphed into something more: a team of German and Israeli scientists has used the gold surface they came up with for a novel form of data storage.
In research published in the journal Science, a team of scientists from Technion-Israel Institute of Technology; the German universities of Stuttgart, Duisburg-Essen, and Kaiserslautern; and the University of Dublin in Ireland has developed a way to exploit the orbital angular momentum of light in a confined device using plasmonics.
Prior to this work, some scientists suggested using the orbital angular momentum of photons as a means for data storage in the open air or in optical fibers. This latest research makes it possible to envision it being used in confined, chip-scale devices.
More of this research is already trickling in since the Nobel Prize announcement. Two teams of researchers from the University of Santiago de Compostela (USC) in Spain have cited this most recent Nobel Prize as a context for their work in developing self-assembling materials based on peptides (compounds consisting of two or more amino acids linked together in a chain) that can stack themselves on top of each other to form nanotubes.
An international team of researchers has produced the world’s smallest magnet and demonstrated that it’s possible to use that magnet—an individual atom—to store a single bit of data.
Until this latest research, led by teams at IBM Research Almaden and École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, molecules were the smallest-ever data storage units. To put this advance in context, think about it like this: With one bit per atom, it would be conceivable to store an entire iTunes library of 35 million songs on a device no bigger than a credit card.