Nanoclast

An image of the inside of Intel's D1X research fab in Hillsboro, Oregon

Intel Now Packs 100 Million Transistors in Each Square Millimeter

I’ll admit it: journalists like milestones. Nice round numbers and anniversaries make for good headlines. So my ears certainly perked up on Tuesday when Kaizad Mistry, a vice president and co-director of logic technology at Intel, said the company can now pack more than 100 million transistors into each square millimeter of chip “for the first time in our industry’s history.” Delivering more transistors in the same area means the circuitry can be made smaller, saving on cost, or that more functionality can be added to a chip without having to make it bigger.

The news came during Intel’s Technology and Manufacturing Day, a behind-the-scenes look at the company’s latest chip classes and packaging technology, and another opportunity for the chipmaker to declare that Moore’s Law is alive and well—at least for Intel. 

The nice round 100 million milestone (100.8 million, to be exact) belongs to Intel’s latest-and-greatest chip generation: 10 nanometers. For those uninitiated in semiconductor lingo, the 10 nm designation is a reference to the “node” or manufacturing technology used to make such chips. As a general rule, the smaller the number, the denser the circuitry. But even though node names look like measurements, today the numbers don’t really correspond to the size of any particular feature and there can be significant variation between companies.

When I wrote about Intel’s 10-nm plans in our January issue previewing the coming year in technology, the company was not yet ready to say much publicly about the specific dimensions of the transistors. This week, it was more forthcoming with figures [pdf]: it’s 34 nm from one fin to the next in the company’s FinFET transistors and 36 nm from one wire to the next in the densest interconnect layers (down from 42 nm and 52 nm, respectively, in the previous, 14-nm chip generation).

Shorter distances such as these mean that 10-nm chips can pack significantly more transistors into a given area. The 100 million density figure comes from a metric that Intel senior fellow Mark Bohr has proposed the industry resurrect in order to better compare chipmakers’ offerings. Instead of measuring a chip manufacturing generation by the area taken up by a certain component or set of components, Bohr proposes that we instead measure chip generations by their transistor density—in particular, by the number spit out by an equation that combines the transistor densities of a standard 2-input NAND cell and a scan flip-flop logic cell.
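As widely reported from Bohr’s proposal, the equation is a weighted sum: 60 percent of the NAND cell’s transistor density plus 40 percent of the scan flip-flop’s. Here is a minimal sketch of that calculation in Python; the weights follow Bohr’s formula, but the cell transistor counts and areas below are illustrative placeholders, not Intel’s actual 10-nm figures.

# Sketch of Bohr's proposed density metric: a weighted sum of the
# transistor densities of a 2-input NAND cell and a scan flip-flop cell.
# The 0.6/0.4 weights follow Bohr's proposal; the cell counts and areas
# below are placeholders for illustration, not Intel's 10-nm numbers.

def cell_density(transistors, cell_area_um2):
    """Transistors per square millimeter for one standard cell."""
    return transistors / cell_area_um2 * 1e6  # 1 mm^2 = 1e6 um^2

nand2 = cell_density(transistors=4, cell_area_um2=0.05)        # placeholder cell
flip_flop = cell_density(transistors=36, cell_area_um2=0.30)   # placeholder cell

density = 0.6 * nand2 + 0.4 * flip_flop
print(f"{density / 1e6:.0f} million transistors per square millimeter")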

And by that metric, Bohr says, Intel has more than doubled its transistor density with each recent generation. From 22 nm to 14 nm, the transistor density jumped by a factor of 2.5. And in the move from 14-nm to 10-nm chip manufacturing technology, the jump was a factor of 2.7, from 37.5 million transistors per square millimeter to more than 100 million. Crucially, the company says, the 10-nm transistors have the capacity for higher speed and greater energy efficiency than their predecessors (although, when I spoke with Bohr late last year, he said the focus lately has been on the latter).
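Those figures are consistent with one another; a quick sanity check, using nothing beyond the numbers quoted above:

# Sanity check on the quoted generational scaling.
density_14nm = 37.5e6   # transistors per mm^2, Intel's 14-nm figure
scaling = 2.7           # quoted 14-nm -> 10-nm density improvement
print(density_14nm * scaling / 1e6)  # ~101 million per mm^2, in line with 100.8 million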

It remains to be seen whether the industry will agree that the new metric is a meaningful one. In comments to EE Times, one analyst said transistor count over a larger area, closer to the size of a real chip, would be a more relevant metric. And an unnamed spokesperson from rival chipmaker TSMC told the site: “I have no idea how Intel does its new calculation...for example, its [first-generation 14nm CPU] Broadwell used to have 18.4 million transistors per mm squared, yet under the new measure it suddenly has 37.5 million transistors per mm2. Are they trying to play paper games?”

Not so fast, says Intel. After this story was originally posted, a company representative wrote to IEEE Spectrum to stress the difference between talking about a chip’s transistor density and this metric, which is designed to assess the capabilities of a manufacturing node. “Simply taking the total transistor count of a chip and dividing by its area is not meaningful because of the large number of design decisions that can affect it—factors such as cache sizes and performance targets can cause great variations in this value,” Bohr wrote in his proposal, titled “Let’s Clear Up the Node Naming Mess.”

Even if there is room to quibble over that specific 100 million figure, Intel is also saying that it is more than doubling transistor density with each new chip generation—and that this more aggressive level of miniaturization helps to counteract the slower cadence that has recently set in with respect to the introduction of each new generation. On balance, Intel said, the company is still on a pace that roughly corresponds to a doubling of transistor density every couple of years.

Intel calls the suite of strategies it uses to accomplish this more-than-doubling “hyperscaling.” It includes design improvements, but a big piece is the company’s approach to laying down the patterns that ultimately become the chip’s transistors and wiring, which Intel fellow Ruth Brain outlined in her talk [pdf]. 

With its 14-nm chips, Brain said, Intel began using a strategy called self-aligned double patterning (SADP). SADP is a form of multiple patterning, a range of strategies that split the patterning process into multiple steps in order to make chip features much smaller than the 193-nm light used to print them would otherwise allow.

Other companies, Brain said, use a simple multiple patterning approach that essentially prints the same pattern multiple times, offset slightly. But that technique relies on a lithography machine’s ability to pinpoint the same spot for each exposure, and variability in this process can degrade chip performance and lower the number of usable chips produced. SADP splits up the patterning in a different way, to sidestep this “overlay” issue.

With 10-nm chips, Intel has adopted self-aligned quadruple patterning (SAQP), which extends the same approach through an additional patterning round to further divide the pitch of the printed pattern. Mistry says SAQP has one more generation in it, which would take Intel down to the feature sizes needed to produce the next generation: 7 nm.
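The appeal of the self-aligned approach is easiest to see in the pitch arithmetic: each round of spacer patterning roughly halves the pitch of the pattern originally printed by the lithography machine, so SADP divides it by two and SAQP by four. Here is a back-of-envelope sketch; the printed-pattern pitches are inferred from that division factor and the pitches Intel disclosed, not figures the company has stated.

# Rough pitch arithmetic for self-aligned multiple patterning (idealized).
# Each spacer round roughly halves the pitch of the printed pattern.

def final_pitch(printed_pitch_nm, spacer_rounds):
    return printed_pitch_nm / (2 ** spacer_rounds)

# If the 14-nm node's 42-nm fin pitch is made with SADP (one spacer round),
# the underlying printed pitch would be about 84 nm.
print(final_pitch(84, spacer_rounds=1))    # -> 42.0

# If the 10-nm node's 34-nm fin pitch is made with SAQP (two spacer rounds),
# the printed pitch would be about 136 nm, comfortably within reach of
# single-exposure 193-nm immersion lithography.
print(final_pitch(136, spacer_rounds=2))   # -> 34.0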

Somewhere in there, we may just see extreme ultraviolet (EUV) lithography enter the picture. EUV uses 13.5-nm radiation (pretty much X-rays) instead of 193-nm ultraviolet light for feature patterning.

But back to the present and that 100 million transistors per square millimeter figure. It’s easy to underplay the engineering feats that go into making that sort of milestone (assuming it stands the test of time) possible. “You know one of the remarkable things about Moore’s Law is that Moore’s Law’s past seems preordained and ordinary, and Moore’s Law’s future is difficult and requires inventions,” Mistry told IEEE Spectrum. 

Now, he says, the FinFET transistor seems par for the course, but it wasn’t when Intel introduced the technology in 2011. “All these things are difficult, but once they’re done they seem normal,” he adds. “And that’s the magic of Moore’s Law.”

This article was updated on 31 March to add a response from Intel to the comments from TSMC and to correct a caption.

Chong Liu of Stanford University

Nanostructures Move From Water Purification to Uranium Extraction

Last August, we reported on work out of the U.S. Department of Energy’s SLAC National Accelerator Laboratory and Stanford University in which the nanomaterial molybdenum disulfide was used to kill 99.999 percent of bacteria in water within just 20 minutes—a process that would otherwise take up to two days if only the ultraviolet (UV) light from the sun were used as a disinfectant.

In a meeting last week, Chong Liu, the postdoc in Yi Cui’s lab at Stanford who was the lead author of that research, made it clear that water purification is just the start for this line of research, which has already had a number of incarnations.

In a proof-of-concept experiment, the researchers attached a CS-FET chip with H2 sensors to a drone, creating an aerial chemical-sensing probe.

Nanochip Gas Sensors Promise Personal Air Quality Monitors in Our Pockets

An international team of researchers has developed a low-power gas sensor chip that can operate at room temperature, making possible the development of personal air-quality monitoring devices that we could carry around with us.

In research described in the journal Science Advances, the researchers fabricated a chemical-sensitive field-effect transistor (CS-FET) platform based on 3.5-nanometer-thin silicon channel transistors. The platform, which is highly sensitive yet consumes very little power, can detect a wide range of gases.

A solution of quantum dots glows bright red when it absorbs light from a UV lamp underneath.

Flying Saucer Quantum Dots: The Secret to Better, Brighter Lasers

Ted Sargent and his team at the University of Toronto have done many things with quantum dots: boosted solar cell efficiency, invented infrared imagers, and created optoelectronics you can apply with a paintbrush. Now Sargent and his team have added a new spice to their recipe for colloidal quantum dots, one that promises to change the struggling prospects of quantum dot-based lasers. If the new approach lives up to its promise, it could lead to brighter, less expensive, and tunable lasers for video projectors and medical imaging, among other applications.

A reconstruction of the wiring and transistors of an Intel G3260 processor

X-rays Map the 3D Interior of Integrated Circuits

A team of researchers based in Switzerland is on the way to laying bare much of the secret technology inside commercial processors. They pointed a beam of X-rays at a piece of an Intel processor and were able to reconstruct the chip’s warren of transistors and wiring in three dimensions. In the future, the team says, this imaging technique could be extended to create high-resolution, large-scale images of the interiors of chips. 

The technique is a significant departure from the way the chip industry currently looks inside finished chips, in order to reverse engineer them or check that their own intellectual property hasn’t been misused. Today, reverse engineering outfits progressively remove layers of a processor and take electron microscope images of one small patch of the chip at a time.

But “all it takes is a few more years of this kind of work, and you'll pop in your chip and out comes the schematic,” says Anthony Levi of the University of Southern California. “Total transparency in chip manufacturing is on the horizon. This is going to force a rethink of what computing is,” he says, and what it means for a company to add value in the computing industry.

Symbolic image of light interacting with a gold surface with 4-fold symmetric Archimedean spirals: Plasmons with orbital angular momentum are excited and swirl towards the center.

Combining Twisted Light and Plasmons Could Supercharge Data Storage

Research that started out with the humble aim of growing an atomically flat, single-crystalline gold surface has ultimately morphed into something more ambitious: a team of German and Israeli scientists is now using the gold surface they came up with for a novel form of data storage.

In research published in the journal Science, a team of scientists from the Technion-Israel Institute of Technology, the German universities of Stuttgart, Duisburg-Essen, and Kaiserslautern, and the University of Dublin in Ireland has developed a way to exploit the orbital angular momentum of light in a confined device using plasmonics.

Prior to this work, some scientists suggested using the orbital angular momentum of photons as a means for data storage in the open air or in optical fibers. This latest research makes it possible to envision it being used in confined, chip-scale devices.

Molecules self assembling

The Nobelists and Their Molecular Machines

While the prospects of molecular nanotechnology—the catch-all term for molecular manufacturing in which nanoscale machines are programmed to build macroscale objects from the bottom up—have remained mostly in the realm of science fiction, the awarding of last year’s Nobel Prize in Chemistry to a trio of scientists who pioneered the development of nanomachines has buoyed hope that we should at least begin to see more research in the field.

More of this research is already trickling in since the Nobel Prize announcement. Two teams of researchers from the University of Santiago de Compostela (USC) in Spain have cited this most recent Nobel Prize as a context for their work in developing self-assembling materials based on peptides (compounds consisting of two or more amino acids linked together in a chain) that can stack themselves on top of each other to form nanotubes.

Dr. Christopher Lutz of IBM Research - Almaden in San Jose, Calif., with IBM Research’s Nobel Prize-winning microscope he used to store data on a single-atom magnet.

Single Atom Serves as World's Smallest Magnet and Data Storage Device

An international team of researchers has produced the world’s smallest magnet and demonstrated that it’s possible to use that magnet—an individual atom—to store a single bit of data.

Until this latest research, led by teams at IBM Research Almaden and École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, molecules were the smallest-ever data storage units. To put this advance in context, think about it like this: With one bit per atom, it would be conceivable to store an entire iTunes library of 35 million songs on a device no bigger than a credit card.
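The credit-card comparison holds up to a rough back-of-envelope check. The per-song size and atom spacing below are assumptions chosen purely for illustration, not figures from the researchers.

# Back-of-envelope check on the "iTunes library on a credit card" claim.
# Assumed (not from the paper): ~4 MB per song, one bit per atom, atoms on
# a square grid spaced about 1 nm apart.

songs = 35e6
bytes_per_song = 4e6                      # assumption
bits_needed = songs * bytes_per_song * 8  # ~1.1e15 bits

spacing_nm = 1.0                          # assumed atom-to-atom spacing
area_cm2 = bits_needed * spacing_nm**2 * 1e-14  # 1 nm^2 = 1e-14 cm^2

print(f"about {area_cm2:.0f} cm^2")       # ~11 cm^2, vs. ~46 cm^2 for a credit card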


New Microscopy Tech Offers a Kind of “Nano-GPS” for Measuring Magnetism of Atoms

Researchers at IBM Research Almaden have developed a new approach to measuring the magnetic field of individual atoms that for the first time gives scientists the ability to put the sensor right next to the atom they want to measure, providing them with a strong and direct signal of the magnetic field. The energy resolution of the new technique is more than 1,000 times higher than that of other microscopy techniques, according to its inventors.

The technique involves purposely placing a “sensor” atom near the “target” atom to measure the latter’s magnetic field. These sensor atoms—also known as electron spin resonance (ESR) sensors—were first developed by IBM back in 2015 and are used inside scanning tunneling microscopes (STMs). STMs—which detect the tunneling of electrons between an ultra-sharp probe and the surface it is scanned across—allow atom-by-atom engineering, so that the positions of both the sensor and the target atoms can be imaged and located with atomic precision.

This latest advance in ESR-equipped STMs, described in the journal Nature Nanotechnology, marks a distinct change from how the magnetic fields of atoms have previously been measured.

“We have shown in the paper how to perform a kind of ‘nano-GPS’ imaging, to detect where other magnetic atoms were located purely by the spin resonance signal on several fixed sensor atoms,” says Christopher Lutz, a staff scientist at IBM Research Almaden. “We intend to use this to image where magnetic centers are in molecules and nanostructures on the surface.”

DNA double helix

Sudoku Hints at New Encoding Strategy for DNA Data Storage

Researchers affiliated with Columbia University and the New York Genome Center have reported a new encoding method that makes it possible to come close to the theoretical maximum for DNA data storage.

In research published in the journal Science, the team says its encoding method achieved a 60 percent increase in storage capacity over previously reported efforts, resulting in a jaw-dropping storage density of 215 petabytes per gram of DNA. For perspective, one petabyte is equivalent to 13.3 years’ worth of HD video.
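A quick arithmetic check of what that comparison implies (assuming 10^15 bytes per petabyte): it works out to roughly 19 megabits per second, a plausible bitrate for HD video.

# What video bitrate does "1 petabyte = 13.3 years of HD video" imply?
seconds = 13.3 * 365.25 * 24 * 3600   # ~4.2e8 seconds
bits = 1e15 * 8                       # one petabyte, in bits
print(bits / seconds / 1e6)           # ~19 Mb/s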

Last year, Microsoft announced that its researchers had set a DNA data-storage record of 200 megabytes. While that was a good indication of how far DNA data storage had come, the announcement remained pretty short on detail, coming only in a blog post on the Microsoft website. A peer-reviewed paper seemed to be sorely lacking to those in the field.

“Our work is the first one in the literature to show that you can get very close to the theoretical capacity of DNA storage architecture,” said Yaniv Erlich, an assistant professor of computer science at Columbia and a core member of the New York Genome Center, in an interview with IEEE Spectrum. In fact, Erlich and his co-author Dina Zielinski of the New York Genome Center report coming within 14 percent of the theoretical limit.


Nanoclast

IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

 
Editor: Dexter Johnson, Madrid, Spain
Contributor: Rachel Courtland, New York City
 