Nanoclast

Artistic illustration of a photonic integrated device

Integrated Photonic Circuits Shrunk Down to the Smallest Dimensions Yet

In a major breakthrough for optoelectronics, researchers at Columbia University have made the smallest integrated photonic circuit yet. In the process, they have managed to attain a high level of performance over a broad wavelength range, something not previously achieved.

The researchers believe their discovery is equivalent to replacing vacuum tubes in computers with semiconductor transistors—something with the potential to completely transform optical communications and optical signal processing.

The research community has been feverishly trying to build integrated photonic circuits that can be shrunk to the size of integrated circuits (ICs) used in computer chips. But there’s a big problem: When you use wavelengths of light instead of electrons to transmit information, you simply can’t compress the wavelengths enough to work in these smaller chip-scale dimensions.

The brass block serves as an electrical ground plane that ensures efficient insertion of the RF currents into the antennas, while microwave connectors mounted to the block allow the device to be embedded in the researchers' microwave setup

Move Over, Spintronics: Here Comes Magnonics to the Rescue of Electronics

As digital electronics approaches the physical limits of using electrical currents to perform the same logic computations as previous generations, the question has become: How do we keep fabricating logic gates when the devices are too small for classical physics to describe their behavior?

A European collaborative research center called Spin+X has offered a prototype of a device that leverages something called spin waves, which may offer a way forward. Spin waves are the synchronous waves of electron spin alignment observed in a magnetic system. If the prototype is any indication, then researchers may have another avenue to explore when traditional electronics reaches its physical limits.

Spin+X is funded by the German Research Foundation as well as the European Union–funded project InSpin. It's also been supported by the Belgian nanotechnology research institute Imec. This concentration of expertise has led to the recent development of what the researchers have dubbed a spin-wave majority gate.

In a traditional semiconductor-based majority gate, the output takes on whichever state—“0” or “1”—is shared by the majority of its three input currents or voltages. The researchers built the spin-wave majority gate described in the journal Applied Physics Letters out of yttrium iron garnet (YIG); its basic operating principle relies on the magnetic material’s atomic magnetic moments, which are essentially the strength of each atom’s magnetism. When these magnetic moments are aligned by an externally applied magnetic field, they interact with each other.

“The interaction can be very well visualized by simply imagining two bar magnets,” explains Tobias Fischer, a doctoral student at the University of Kaiserslautern, in Germany, and lead author of the paper, in an email interview with IEEE Spectrum. “If one brings them together closely and moves one of the magnets, the second magnet will also be influenced by the first magnet's motion.”

The same holds true for the atomic magnetic moments, according to Fischer. When the researchers locally apply a magnetic radio-frequency (RF) field in the input wave guides—produced, in this case, by sending RF currents through copper structures underneath the three inputs of a trident-shaped structure—some of the moments are forced to precess around the direction of the external field. Precession occurs when atoms with unpaired electron spins are placed in a magnetic field and their moments rotate around the field at a precise frequency that depends on the field strength and the atom’s magnetic moment.
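
For a rough sense of scale (not a figure from the paper), the precession frequency in the simplest picture grows linearly with the applied field. The sketch below assumes the free-electron gyromagnetic ratio of about 28 GHz per tesla, a common approximation for YIG, and ignores anisotropy and demagnetizing fields; the field values are illustrative.

```python
# Rough estimate of the precession frequency of magnetic moments in an
# applied field, ignoring anisotropy and demagnetizing contributions.
# The gyromagnetic ratio is the free-electron value, a common approximation
# for YIG; the field values are illustrative, not taken from the paper.

GAMMA_OVER_2PI = 28.0e9  # Hz per tesla (gamma / 2*pi for electron spins)

def precession_frequency_hz(field_tesla: float) -> float:
    """f = (gamma / 2*pi) * B for a magnetic moment in field B."""
    return GAMMA_OVER_2PI * field_tesla

for b in (0.05, 0.1, 0.2):  # applied fields in tesla
    print(f"B = {b:.2f} T  ->  f = {precession_frequency_hz(b) / 1e9:.1f} GHz")
```

Fields of a fraction of a tesla already land in the gigahertz range, which is why spin-wave devices naturally operate at the clock-like frequencies Fischer mentions below.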

The waves excited in the three input wave guides propagate toward the combiner of the device and interfere with each other, resulting in a wave propagating toward the output. Since a spin wave in a wave guide carries a stray magnetic field around the wave guide, it can be picked up inductively by another antenna underneath the output wave guide. The phase of the output wave depends on the phases of the input waves, and it is this phase that is used to encode and process information.
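
A minimal numerical sketch of that phase logic, under the simplifying assumption of three equal-amplitude waves whose binary states are encoded as phases 0 and π: when the waves are summed, the phase of the result follows the majority of the input phases. This illustrates the principle only; it is not the device model used in the paper.

```python
import cmath
import itertools

def majority_phase(*phases):
    """Superpose three equal-amplitude waves and read out the phase of the sum.
    Inputs are encoded as phase 0 ('0') or pi ('1'); the phase of the
    superposition follows the majority of the inputs."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return 0.0 if total.real > 0 else cmath.pi

for bits in itertools.product((0, 1), repeat=3):
    out = majority_phase(*(b * cmath.pi for b in bits))
    print(bits, "->", 0 if out == 0.0 else 1)  # reproduces the majority truth table
```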

“The interaction between the magnetic moments also makes neighboring moments start to precess,” says Fischer. “This wave-like excitation begins to propagate through the magnetic material and that is what we call spin wave (or, in the particle picture, magnon).”

The term “magnon” refers to the quasiparticles of spin waves and explains why this field of research is being called "magnonics." In contrast to spintronics, which still makes use of the electron’s electric charge in addition to its spin moment, magnonics employs spin-wave excitations in magnetic materials.

“Basically, spintronics still requires electric currents but usually restricts these currents to consisting only of spin-up or spin-down electrons, thus providing an additional degree of freedom to process or encode information,” explains Fischer. “However, magnonics can operate without any electric currents by only relying on the propagation of spin waves in a magnetic material as a carrier of information.”

This ability leads to some pretty clear advantages for magnonics, according to Fischer. Since it avoids electric currents, losses such as Joule heating can be drastically reduced. Also, spin waves can feature wavelengths in the nanometer range and gigahertz frequencies, which allows for downscaling of devices and high clock frequencies.

Nevertheless, there are still some challenges to be overcome, such as the efficient excitation and detection of spin waves in order to couple magnonics to conventional electronics.

While there has been another majority-gate device based on magnons, according to Fischer, that device was based on the excitation of magnons via spin-current injection from adjacent platinum structures and the propagation of magnons in a plain film of magnetic material. “As a consequence, this device would not be suitable to make use of the advantages of a wave-guide-based majority gate such as mode selection in the output wave guide,” he adds.

One area that will need to be addressed in future research is materials science. While YIG features very low damping, which lets spin waves propagate over long distances, the material’s CMOS compatibility is rather limited, according to Fischer. “It would also be nice to have a material which can easily be deposited by conventional sputtering techniques, which is also not the case for YIG,” he adds.

In addition, the device has to be significantly miniaturized. Toward this end, Fischer and his colleagues are looking into fabricating majority-gate structures from Heusler thin films—alloys whose constituent elements together yield desirable magnetic properties, such as low damping.

Fischer adds, “All in all, I think there are still challenges to be overcome until a real implementation of spin-wave devices in information technology comes within reach, but I think we are well on track with investigating the fundamentals of such a concept.”

Photo: Stefan Wachter

The Most Complex 2D Microchip Yet

A three-atom-thick microchip with more than 100 transistors is the most complex microprocessor yet made from a two-dimensional material, researchers say.

The new device is made of a thin film of molybdenite, or molybdenum disulfide (MoS2), which consists of a sheet of molybdenum atoms sandwiched between two layers of sulfur atoms. A single-molecule layer of molybdenum disulfide is only six-tenths of a nanometer thick. In comparison, the active layer of a silicon microchip is up to about 100 nanometers thick. (A nanometer is a billionth of a meter; the average human hair is about 100,000 nanometers wide.)

Scientists hope two-dimensional materials such as graphene or molybdenite will allow Moore's Law to continue once it becomes impossible to make further progress using silicon. Whereas graphene is an excellent conductor, making it ideal for use in wiring and interconnections, molybdenite is a semiconductor, which means it can serve in the transistor switches that lie at the heart of electronic circuits.

The scientists detailed their findings online April 11 in the journal Nature Communications.

Close up, silicon carbide looks like a pile of grayish-brown hexagonal ice crystals

Graphene Photodetector Could Make Sharper Images With Fewer Pixels

While inventors of digital electronic applications are still wrestling with graphene’s lack of a band gap, in optoelectronics the wonder material is more popular than ever. Nowhere is this more apparent than in photodetectors, where graphene’s properties as an extreme-broadband absorber enable photodetection at visible, infrared, microwave, and terahertz frequencies, all while providing very high photoresponse speeds.

Despite all this promise, research has been somewhat limited by the fact that the photoresponse occurs only at specific locations on the graphene, which represent a relatively small area compared with the photodetector as a whole.

Now researchers at Purdue University have found a way to work around this limitation, and the result could mean getting sharper images even with fewer photodetector pixels.

Three horizontal blocks of gray and white stripes, with the stripe patterns becoming progressively finer from the top block to the bottom one

Keeping Block Copolymers in Line Could Lead to Smaller Microchips

In an effort to keep Moore’s Law going, a team of engineers from MIT, the University of Chicago, and the Argonne National Laboratory has developed a technique to make microchip wire patterns tinier. They’ve accomplished this by making those patterns assemble themselves from a particular type of polymer. With this method, called directed self-assembly, the resulting features are one-quarter the size of features made using today’s chip patterning techniques. Because the technique relies on several tools already commonly used in semiconductor manufacturing, the engineers believe it could easily integrate into the fabrication process.

This research, detailed last week in Nature Nanotechnology, resulted in chip features with a pitch—the distance between the midpoints of two adjacent features—of 18.5 nanometers. Chips in production today already achieve smaller dimensions, but this was a proof-of-concept demonstration.

“We’re not saying they’re the smallest features, by any means, that have been demonstrated,” says Paul Nealey, a University of Chicago professor of molecular engineering who worked on this project.

While the ultimate goal is to create even smaller nanostructures, this experiment concentrated on refining fabrication methods. “It was really more about perceived integration using semiconductor manufacturing-friendly tools.”

Chris Mack, a lithographer who did not work on the project, thinks this research is important. “I would characterize this as a useful, incremental step,” says Mack. He sees self-assembly as an intriguing solution to the problem of creating smaller chips, because it’s an inexpensive and accessible technique.

This research team’s technique relies on the creation of multiple layers. First, a pattern of 74-nanometer-wide trenches is made using traditional lithographic methods. Lithography is the process of shining patterns of light onto a photosensitive surface. The areas touched by light harden, while the negative space remains soft and gets washed away. Ordinarily, the negative spaces might be filled with copper to form interconnects, but here the hardened pattern then serves as a template for the next layer, a film of a chemical called a block copolymer.

Block copolymers are made of two molecules that want to do different things but are bound together. Mack described the concept using political parties.

“You’ve got one Democrat handcuffed to a Republican. And so you’ve got a whole room full of people like that and they line up so that every Democrat has a Democrat to talk to, because they hate talking to the Republican.”

In this case, the block copolymer forms horizontal layers within the trench, because one part of the copolymer prefers the surface energy (the “political leanings”) of the air interface. But such an arrangement doesn’t make the overall circuit pattern any finer, so Nealey and his team had to find a way to turn the layers vertical. The solution was to add a neutral layer on top of the block copolymer, so that neither side is drawn upward more than the other and the trench fills with vertical layers of polymer. That meant each 74-nanometer trench was now subdivided into four narrower trenches, which is where the 18.5-nanometer pitch comes from.
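
The back-of-the-envelope arithmetic behind that pitch figure (my reading of the reported numbers, not a calculation from the paper) is simply the guiding-trench width divided by the number of block-copolymer periods it holds:

```python
# Back-of-the-envelope check relating the reported numbers: a 74 nm guiding
# trench subdivided into four block-copolymer periods gives the 18.5 nm pitch.
trench_width_nm = 74.0
periods_per_trench = 4      # vertical lamellae formed inside each trench
pitch_nm = trench_width_nm / periods_per_trench
print(f"resulting pitch: {pitch_nm} nm")   # -> 18.5 nm
```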

Karen Gleason, professor of chemical engineering at MIT, came up with a way to deposit this crucial top coat. This method, called initiated chemical vapor deposition (iCVD), deposits the neutral layer from a vapor phase and in the process creates a layer with the same interfacial properties as the block copolymer layer.

The research promises to make directed self-assembly more viable for manufacturing sub-10-nanometer chips, but it’s not quite there yet. Meanwhile, researchers are making headway using other methods as well. To keep pace, Mack says, researchers have to set their goals very high. “In the last 10 years, as researchers have been trying to develop directed self-assembly as a real-life solution to patterning really small features, the needs of the industry keep progressing,” he says.

X-ray topography diffraction measurement device set up for measuring battery discharge rates

The Future of Energy Technology as Seen Through X-Ray Eyes

“Batteries are complicated.” This was the unrehearsed refrain I heard repeatedly from Mike Toney and the researchers who make up the Toney research team at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory in Menlo Park, Calif.

During a visit to their laboratories, I learned that not only are the complex inner workings of batteries being revealed by the assortment of X-ray microscopy tools used at SSRL, but the latest innovations in photovoltaics are also being examined and characterized, all with the aim of making sure that both energy storage and energy generation technologies can meet the demands of future generations.

“Characterization is perhaps an underappreciated term,” explains Toney. His take on it?

I see it as more that we’re involved in characterization directed at understanding how things work, or how they're put together. One example of this is batteries, which are presently a quite popular topic. We’re involved in understanding how the lithium ions shuttle back and forth between the anode and cathode and the resulting changes on a very small level. Right now, we are looking at an atom-sized level and how those changes make an impact on the nanoscale level and then eventually understanding how that leads to changes on the electrode level that lead to failure.

While Toney’s team does spend some time working in collaboration with other groups at Stanford—such as the researchers in Yi Cui’s lab at the Stanford Institute for Materials and Energy Sciences—another big part of its research is working with commercial battery and photovoltaics manufacturers who need to know how their devices work on the atomic scale. These measurements are often only possible with the X-rays produced when high-energy electrons speed around the synchrotron at SLAC.

Toney calls this work “foundational science research”; it involves determining the scientific underpinnings of how batteries and photovoltaics work. It can take the form of looking at a nanostructured electrode—mixed together with a bunch of carbon in an electrolyte—and then reducing that down to a single crystal.

While this structure is not a realistic battery geometry, it does make it possible to simulate a realistic environment in which lithium would move in and out of the nanostructured electrode and reaction layers would form. Combining it with highly sensitive X-ray probes allows detailed information to be acquired on how the battery would operate under real-world conditions, and it helps determine what kinds of charging protocols or electrolyte chemistries can be used to basically encapsulate—or passivate—the surface.

Chris Takacs, a post-doc research fellow at SLAC who is a member of Toney’s team, has been testing one company’s batteries using one of these small battery packages and the X-rays from SLAC’s synchrotron. Takacs has devised a special measurement technique using X-ray topography diffraction that he has dubbed depth-resolved X-ray diffraction.

You can see Takacs describe his measurement arrangement in the video below.

“We’re trying to understand how Li-ion concentration gradients build up on commercial battery cells as you charge and discharge them fast,” says Takacs. “We are trying to uncover the limits in these types of performances. So, right now we're just looking at the cathode. We're trying to understand if there is an enrichment of the lithium ions near the separator or near the current collector when you’re charging very quickly. This is considered one of the major limitations for the rate at which you charge.”
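
To picture the kind of gradient Takacs is probing, here is a toy one-dimensional diffusion sketch: lithium is pulled out at the separator-facing surface of the cathode faster than solid-state diffusion can level the concentration, so a gradient builds up through the electrode's thickness. All numbers are dimensionless placeholders, not values or models from the SLAC measurements.

```python
import numpy as np

# Toy 1D picture (not the SLAC model) of a Li concentration gradient forming
# across a cathode during fast charge. All quantities are dimensionless
# placeholders chosen only to make the effect visible.
n = 50                      # grid cells through the cathode thickness
c = np.ones(n)              # normalized Li concentration, uniform at start
alpha = 0.2                 # D*dt/dx^2, kept below 0.5 for numerical stability
extraction = 0.002          # Li removed per step at the separator-facing surface

for _ in range(500):
    lap = np.empty_like(c)
    lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]   # Fick's second law in the bulk
    lap[0] = c[1] - c[0]                        # surface cell replenished by diffusion
    lap[-1] = c[-2] - c[-1]                     # no flux at the current collector
    c += alpha * lap
    c[0] = max(c[0] - extraction, 0.0)          # delithiation at the separator side

print(f"separator side: {c[0]:.2f}  mid: {c[n // 2]:.2f}  collector side: {c[-1]:.2f}")
```

In broad terms, a depth-resolved diffraction measurement is after exactly this kind of profile, read out from a real cell rather than assumed in a model.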

One of the most promising materials for improving the charge life of Li-ion batteries has been nanostructured silicon. Silicon has been found to improve the charge capacity of anodes (negative electrodes) in lithium-ion batteries by as much as ten times over standard graphite-based anodes. Unfortunately, silicon anodes crack and become unusable after a few charge/discharge cycles, because the material swells and shrinks as the ions shuttle back and forth. The hope has been that nanostructuring the silicon can reduce or eliminate this rapid cracking. Toney and his team are looking at this problem and others associated with silicon in batteries.

“One of the problems with silicon is you grow this solid-electrolyte interphase right at the surface of the silicon that consumes the electrolyte and consumes lithium,” says Toney. “If that grows uncontrollably, all the lithium ends up there and not in silicon where you want it. So we want a passivating layer there that is compliant in the sense that it can stretch. And so we’re in the process of providing some knowledge to guide people and get other researchers to think about what additives we want to add to electrolyte to kind of tune the properties of this layer.”

While polymers have been used as passivating layers, Toney says that you can also tune the reaction layers that form naturally so that they serve as an extremely effective passivating layer.

A great example of this is not in the battery space, but in stainless steel manufacturing, where the alloying elements are designed to create a passive film on the surface that basically prevents the stainless steel from corroding and gives long life, according to Toney. “This passivating layer has allowed stainless steel to be used ubiquitously in our society,” he adds.

The Li-ion battery has achieved the same ubiquity. As a result, battery research has continued to take on an ever greater share of Toney’s research. He marks the beginning of this surge in battery research with the introduction of the Tesla all-electric vehicles.

“If you had asked me 10 years ago if electric vehicles were possible at a reasonable price, I think I probably wouldn't have believed it,” says Toney. “But once you start to see the first of the Teslas that came out, you start to realize these are real.  And that is kind of about the same time that I think a lot of people start to get interested in battery technology.”

The Li-ion batteries that are currently used in Tesla cars still have some room for improvement, according to Toney. But those improvements will likely remain incremental.

“Over the next few years, we will continue to see the same incremental five- to 10-percent per year advancements in terms of capacities,” says Toney. “The costs actually have been going down much faster than that. So I would expect that some of those will continue over the next few years, but at some point—at least from the cost perspective—you’re going to hit a limit. There are some predictions that this will happen in two or three years. You really can’t get a battery much cheaper with the current chemistries.”

Alternative chemistries to today’s dominant Li-ion battery are another line of research that has taken on increasing importance. Most notable among these are lithium-metal batteries. Johanna Nelson Weker, a staff scientist at SSRL, has been examining these chemistries with the aid of a transmission X-ray microscope.

With this device, Weker essentially takes images of anodes in lithium-metal batteries in situ across a 30-micron field of view and is able to get resolutions down to 30 nanometers.

“We can either look at how their chemistry is changing or how their morphology is changing in 3D,” says Weker. “So 3D imaging is very much like a computerized tomography (CT) scan. We rotate our sample, take images at many different angles, put them into an algorithm, and out comes a 3D image.”
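
The reconstruction step Weker describes is standard computed tomography: many projections at different angles go into an algorithm that recovers the volume. The two-dimensional sketch below illustrates the idea with scikit-image's Radon-transform utilities on a synthetic disk; it is a stand-in for the workflow, not the group's actual reconstruction code.

```python
import numpy as np
from skimage.transform import radon, iradon

# 2D stand-in for the tomography workflow: project the "sample" at many
# angles, then feed the projections to a reconstruction algorithm.
# The sample here is a synthetic disk, not real battery data.
size = 128
y, x = np.mgrid[:size, :size]
sample = ((x - size / 2) ** 2 + (y - size / 2) ** 2 < (size / 4) ** 2).astype(float)

angles = np.linspace(0.0, 180.0, 180, endpoint=False)  # one projection per degree
sinogram = radon(sample, theta=angles)                  # "take images at many different angles"
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection

print(f"mean reconstruction error: {np.abs(reconstruction - sample).mean():.3f}")
```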

In what is essentially spectral microscopy, the researchers are able to see what all the elements in the battery are doing spatially while it’s cycling. “For example, it will show you whether your lithium is going into your cathode in the core-shell manner—from the outside in—or is it going to smaller particles first and ignoring the large particles. Or it will show you whether every particle is just transforming simultaneously, both large and small particles,” explains Weker.

In the video below, you can see Weker describe what they discovered when looking at how these particles lithiate in a lithium-metal battery sample.

Weker’s work has revealed that a long-held assumption about how these particles lithiate was wrong: they do not always lithiate in the core-shell manner.

“What we found with lithium-iron phosphate is it doesn't act that way,” she explains. “In the nanoparticles, there's a preferential direction depending not only on the crystalline lattice planes, but also basically one particle will start to lithiate. And all the neighboring particles will not and they actually donate their lithium to that particle so the lithiation occurs one particle at a time.”

Not all the members of Toney’s team are using X-rays to examine energy storage technologies. Some are looking at the cutting-edge materials used in photovoltaics, such as halide perovskites.

Aryeh (Ari) Gold-Parker, a PhD student at Stanford working with Toney’s team, is employing X-ray absorption spectroscopy to identify the elements present in a sample, determine their relative quantities, and learn something about the chemical environment in which their atoms reside.

The starting material for halide perovskites contains quite a bit of chlorine in addition to iodine, but it’s widely understood that once you’ve fully prepared the film by heating it, almost all of the chlorine is gone; what you’re left with is primarily iodine. What Gold-Parker did was to heat the film—and thereby prepare it—while keeping it in the X-ray beamline.

This arrangement allowed him to monitor the chlorine leaving the film as it was being heated and also to discover that the chlorine atoms in the film were moving from one local environment to another throughout the heating process. Gold-Parker believes this will prove extremely useful for understanding how these perovskite films form and might eventually help optimize actual solar-cell performance.

“At the end of the day, the real hope is to make high-efficiency devices,” says Gold-Parker. “There are a lot of engineers who are fine with all these different chemical compositions for this material system, just swapping in all different elements and molecules in the different sites trying to achieve the highest efficiency. But the field has gotten way ahead of the basic science. So the engineers might be two or three years ahead of the actual understanding of why these atoms are making a difference.”

This is what Toney might describe with his term “foundational science research”. Instead of hit-and-miss iterative processes, Toney and his team are trying to uncover the fundamental chemistry and physics that make our next-generation energy storage and generation systems operate. This will likely make them better in just about every performance metric and also make them cheaper to produce.

Four of the nanofabricated silica chips that enable the "Accelerator-on-a-Chip" technology

Nanofabrication Enables "Particle-Accelerator-on-a-Chip" Technology

About 15 months ago, the Gordon and Betty Moore Foundation awarded US $13.5 million to a five-year project involving an international collection of universities and national labs to start work on shrinking particle accelerators so that they could fit on a chip. The project, dubbed “Accelerator on a Chip,” could have a profound impact on both fundamental science research and medicine.

In a nutshell, the aim is to use lasers and a piece of nanostructured silicon or glass about the size of a grain of rice to accelerate electrons at a rate up to 30 times higher than typical values for conventional technology. The resulting technology could potentially match the power of SLAC's 3.2-kilometer-long linear accelerator in as little as 100 meters.

In a visit to the offices of Joel England, a fellow in the Advanced Accelerator Research Department at SLAC and one of the leaders of the Accelerator-on-a-Chip project, we got some insights into the science and technology involved and how the work is proceeding.

Stanford researcher Hao Yan presents models of the diamondoid molecule

Diamondoids on Verge of Key Application Breakthroughs

Ever since Jeremy Dahl of Stanford University first isolated the molecules known as “diamondoids” from crude oil in 2003, the materials science community has been fascinated with their potential. Diamondoids, which are both the smallest and purest form of diamond, have a unique set of properties that have led scientists to consider them for applications including quantum computation and so-called diamondoid mechanosynthesis (DMS), in which diamondoid structures are built using a programmable molecular-positioning approach.

While DMS may still be a long way off, researchers at Stanford and the U.S. Department of Energy’s SLAC National Accelerator Laboratory in Menlo Park are continuing to experiment with the material. They have developed more short-term applications for the molecule, while still keeping their eyes on the horizon for its long-term potential.

Hao Yan, a postdoc with the Melosh Research Group at the Stanford Institute for Materials and Energy Sciences (SIMES), reported late last year in the journal Nature Materials that diamondoids could help control the self-assembly of nanowires, promising a kind of template for creating a variety of materials with unique properties.

An image of the inside of Intel's D1X research fab in Hillsboro, Oregon

Intel Now Packs 100 Million Transistors in Each Square Millimeter

I’ll admit it: journalists like milestones. Nice round numbers and anniversaries make for good headlines. So my ears certainly perked up on Tuesday when Intel said that it can now pack more than 100 million transistors in each square millimeter of chip “for the first time in our industry’s history,” said Kaizad Mistry, a vice president and co-director of logic technology at the company. Delivering more transistors in the same area means the circuitry can be made smaller, saving on cost, or it means that more functionality can be added to a chip without having to make it bigger. 

The news came during Intel’s Technology and Manufacturing Day, a behind-the-scenes look at the company’s latest chip classes and packaging technology, and another opportunity for the chipmaker to declare that Moore’s Law is alive and well—at least for Intel. 

The nice round 100 million milestone (100.8 million, to be exact) belongs to Intel’s latest-and-greatest chip generation: 10 nanometers. For those uninitiated in semiconductor lingo, the 10 nm designation is a reference to the “node” or manufacturing technology used to make such chips. As a general rule, the smaller the number, the denser the circuitry. But even though node names look like measurements, today the numbers don’t really correspond to the size of any particular feature and there can be significant variation between companies.

When I wrote about Intel’s 10-nm plans in our January issue previewing the coming year in technology, the company was not yet ready to say much publicly about the specific dimensions of the transistors. This week, they were more forthcoming with figures [pdf]: it’s 34 nm from one fin to the next in the company’s FinFET transistors and 36 nm from one wire to the next in the most dense interconnect layers (down from 42 nm and 52 nm, respectively, in the previous, 14-nm chip generation).

Shorter distances such as these mean that 10-nm chips can pack significantly more transistors into a given area. The 100 million density figure comes from a metric that Intel senior fellow Mark Bohr has proposed the industry resurrect in order to better compare chipmakers’ offerings. Rather than measuring a chip manufacturing generation by the area taken up by a certain component or set of components, Bohr proposes that we measure chip generations by their transistor density—in particular, by the number spit out by an equation that combines the transistor density of a standard 2-input NAND cell and of a scan flip-flop logic cell.

And by that metric, Bohr says, Intel has more than doubled its transistor density with each recent generation. From 22 nm to 14 nm, transistor density jumped by a factor of 2.5. And in the move from 14-nm to 10-nm chip manufacturing technology, the jump was 2.7x, from 37.5 million transistors per square millimeter to more than 100 million. Crucially, the company says, the 10-nm transistors have the capacity for higher speed and greater energy efficiency than their predecessors (although, when I spoke with Bohr late last year, he said the focus lately has been on the latter).
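
A minimal sketch of Bohr's proposed metric follows, using the 0.6/0.4 NAND-to-flip-flop weighting from his proposal as it is commonly reported; the cell transistor counts and areas below are illustrative placeholders, not Intel's published 10-nm library figures.

```python
# Sketch of the density metric Mark Bohr proposes: a weighted sum of the
# transistor density of a 2-input NAND cell and of a scan flip-flop cell.
# The 0.6/0.4 weighting follows the proposal as commonly reported; the cell
# transistor counts and areas are illustrative placeholders, not Intel's
# published 10-nm numbers.

def node_density_millions_per_mm2(nand2_transistors, nand2_area_um2,
                                  sff_transistors, sff_area_um2):
    """Weighted transistor density; transistors per um^2 equals millions per mm^2."""
    return (0.6 * nand2_transistors / nand2_area_um2
            + 0.4 * sff_transistors / sff_area_um2)

# Made-up cell figures chosen only to land near the reported scale:
print(node_density_millions_per_mm2(4, 0.040, 24, 0.25))  # ~98 million transistors/mm^2

# Sanity check on the generation-to-generation jump Intel cites:
print(round(100.8 / 37.5, 2))  # ~2.69, the roughly 2.7x jump from 14 nm to 10 nm
```

The point of the weighted average is to capture both dense combinational logic (the NAND cell) and the bulkier sequential logic (the flip-flop) in a single figure of merit.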

It remains to be seen whether the industry will agree that the new metric is a meaningful one. In comments to EE Times, one analyst said transistor count over a larger area, closer to the size of a real chip, would be a more relevant metric. And an unnamed spokesperson from rival chipmaker TSMC told the site: “I have no idea how Intel does its new calculation...for example, its [first-generation 14nm CPU] Broadwell used to have 18.4 million transistors per mm squared, yet under the new measure it suddenly has 37.5 million transistors per mm2. Are they trying to play paper games?”

Not so fast, says Intel. After this story originally posted, a company representative wrote to IEEE Spectrum to stress the difference between talking about a chip’s transistor density and this metric, which is designed to assess the capabilities of a manufacturing node. “Simply taking the total transistor count of a chip and dividing by its area is not meaningful because of the large number of design decisions that can affect it–factors such as cache sizes and performance targets can cause great variations in this value,” Bohr wrote in his proposal, titled “Let’s Clear Up the Node Naming Mess.”

Even if there is room to quibble over that specific 100 million figure, Intel is also saying that it is more than doubling transistor density with each new chip generation—and that this more aggressive level of miniaturization helps to counteract the slower cadence that has recently set in with respect to the introduction of each new generation. On balance, Intel said, the company is still on a pace that roughly corresponds to a doubling of transistor density every couple of years.

Intel calls the suite of strategies it uses to accomplish this more-than-doubling “hyperscaling.” It includes design improvements, but a big piece is the company’s approach to laying down the patterns that ultimately become the chip’s transistors and wiring, which Intel fellow Ruth Brain outlined in her talk [pdf]. 

With its 14-nm chips, Brain said, Intel began using a strategy called self-aligned double patterning (SADP). SADP is a form of multiple patterning, a family of strategies that split the patterning process into multiple steps in order to make chip features much smaller than the 193-nm wavelength of the light used to print them.

Other companies, Brain said, use a simple multiple patterning approach that essentially prints the same pattern multiple times, offset slightly. But that technique relies on a lithography machine’s ability to pinpoint the same spot for each exposure, and variability in this process can degrade chip performance and lower the number of usable chips produced. SADP splits up the patterning in a different way, to sidestep this “overlay” issue.

With 10-nm chips, Intel has adopted self-aligned quadruple patterning (SAQP), a similar approach that requires four passes through a lithography machine. Mistry says SAQP has one more generation in it, which would take Intel down to the feature sizes needed to produce the next node: 7 nm.
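
The pitch arithmetic behind spacer-based patterning is simple: each self-aligned spacer pass halves the pitch of the printed pattern, so SADP divides a single-exposure pitch by two and SAQP by four. In the sketch below, the 144-nm starting pitch is my own assumption, chosen so the result lands on the 36-nm interconnect pitch quoted above; it is not a figure Intel disclosed.

```python
# Pitch division in spacer-based multiple patterning: each self-aligned
# spacer pass halves the pitch of the lithographically printed pattern.
# The 144 nm single-exposure starting pitch is an assumed value, not a
# number from Intel's presentation.

def patterned_pitch_nm(single_exposure_pitch_nm: float, spacer_passes: int) -> float:
    """Final pitch after a given number of self-aligned spacer passes."""
    return single_exposure_pitch_nm / (2 ** spacer_passes)

base_pitch = 144.0  # nm, assumed 193-nm immersion single-exposure pitch
print("single exposure:", patterned_pitch_nm(base_pitch, 0), "nm")
print("SADP (one spacer pass):", patterned_pitch_nm(base_pitch, 1), "nm")    # 72 nm
print("SAQP (two spacer passes):", patterned_pitch_nm(base_pitch, 2), "nm")  # 36 nm
```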

Somewhere in there, we may just see extreme ultraviolet (EUV) lithography enter the picture. EUV uses 13.5-nm radiation (pretty much X-rays) instead of 193-nm ultraviolet light for feature patterning.

But back to the present and that 100 million transistors per square millimeter figure. It’s easy to underplay the engineering feats that go into making that sort of milestone (assuming it stands the test of time) possible. “You know one of the remarkable things about Moore’s Law is that Moore’s Law’s past seems preordained and ordinary, and Moore’s Law’s future is difficult and requires inventions,” Mistry told IEEE Spectrum. 

Now, he says, the FinFET transistor seems par for the course, but it wasn’t when Intel introduced the technology in 2011. “All these things are difficult, but once they’re done they seem normal,” he adds. “And that’s the magic of Moore’s Law.”

This article was updated on 31 March to add a response from Intel to the comments from TSMC and to correct a caption.

Chong Liu of Stanford University

Nanostructures Move From Water Purification to Uranium Extraction

Last August, we reported on work out of the U.S. Department of Energy’s SLAC National Accelerator Laboratory and Stanford University in which the nanomaterial molybdenum disulfide was used to kill 99.999 percent of bacteria in water within just 20 minutes—a process that would otherwise take up to two days if only the ultraviolet (UV) light from the sun were used as a disinfectant.

In a meeting last week with Chong Liu, the postdoc in Yi Cui’s lab at Stanford who was the lead author of that research, it became clear that water purification is just the start for this line of research, which has already had a number of incarnations.


Nanoclast

IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

 