Tech Talk

A plot of the physical gate length of transistors, which could stop getting smaller as early as 2021.

Transistors Will Stop Shrinking in 2021, Moore’s Law Roadmap Predicts

After more than 50 years of miniaturization, the transistor could stop shrinking in just five years. That is the prediction of the 2015 International Technology Roadmap for Semiconductors, which was officially released earlier this month. 

After 2021, the report forecasts, it will no longer be economically desirable for companies to continue to shrink the dimensions of transistors in microprocessors. Instead, chip manufacturers will turn to other means of boosting density, namely turning the transistor from a horizontal to a vertical geometry and building multiple layers of circuitry, one on top of another. 

For some, this change will likely be interpreted as another death knell for Moore’s Law, the repeated doubling of transistor densities that has given us the extraordinarily capable computers we have today. Compounding the drama is the fact that this is the last ITRS roadmap, the end to a more-than-20-year-old coordinated planning effort that began in the United States and was then expanded to include the rest of the world.

Citing waning industry participation and an interest in pursuing other initiatives, the Semiconductor Industry Association—a U.S. trade group that represents the interests of IBM, Intel, and other companies in Washington, and a key ITRS sponsor—will do its own work, in collaboration with another industry group, the Semiconductor Research Corporation, to identify research priorities for government- and industry-sponsored programs. Other ITRS participants are expected to continue with a new roadmapping effort under a new name, conducted as part of an IEEE initiative called Rebooting Computing.

These roadmapping shifts may seem like trivial administrative changes. But “this is a major disruption, or earthquake, in the industry,” says analyst Dan Hutcheson, of the firm VLSI Research. U.S. semiconductor companies had reason to cooperate and identify common needs in the early 1990s, at the outset of the roadmapping effort that eventually led to the ITRS’s creation in 1998. Suppliers had a hard time identifying what the semiconductor companies needed, he says, and it made sense for chip companies to collectively set priorities to make the most of limited R&D funding.

But the difficulty and expense associated with maintaining the leading edge of Moore’s Law has since resulted in significant consolidation. By Hutcheson’s count, 19 companies were developing and manufacturing logic chips with leading-edge transistors in 2001. Today, there are just four: Intel, TSMC, Samsung, and GlobalFoundries. (Until recently, IBM was also part of that cohort, but its chip fabrication plants were sold to GlobalFoundries.)

These companies have their own roadmaps and can communicate directly to their equipment and materials suppliers, Hutcheson says. What’s more, they’re fiercely competitive. “They don’t want to sit in a room and talk about what their needs are,” Hutcheson says. “It’s sort of like everything’s fun and games when you start off at the beginning of the football season, but by the time you get down to the playoffs it’s pretty rough.”

“The industry has changed,” agrees Paolo Gargini, chair of the ITRS, but he highlights other shifts. Semiconductor companies that no longer make leading-edge chips in house rely on the foundries that make their chips to provide advanced technologies. What’s more, he says, chip buyers and designers—companies such as Apple, Google, and Qualcomm—are increasingly dictating the requirements for future chip generations. “Once upon a time,” Gargini says, “the semiconductor companies decided what the semiconductor features were supposed to be. This is no longer the case.”

This final ITRS report is titled ITRS 2.0. The name reflects the idea that improvements in computing are no longer driven from the bottom up, by tinier switches and denser or faster memories. Instead, the report takes a more top-down approach, focusing on the applications that now drive chip design, such as data centers, the Internet of Things, and mobile gadgets.

The new IEEE roadmap—the International Roadmap for Devices and Systems—will also take this approach, but it will add computer architecture to the mix, allowing for “a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software,” according to a recent press release.

Transistor miniaturization was still a part of the long-term forecast as recently as 2014, when the penultimate ITRS report was released. That report predicted that the physical gate length of transistors—an indicator of how far current must travel in the device—and other key logic chip dimensions would continue to shrink until at least 2028. But since then, 3D concepts have gained momentum. The memory industry has already turned to 3D architectures to ease miniaturization pressure and boost the capacity of NAND Flash. Monolithic 3D integration, which would build layers of devices one on top of another, connecting them with a dense forest of wires, has also been an increasingly popular subject of discussion.

The new report embraces these trends, predicting an end to traditional scaling—the shrinking of chip features—by the early 2020s. But the idea that we’re now facing an end to Moore’s Law “is completely wrong,” Gargini says. “The press has invented multiple ways of defining Moore’s Law, but there is only one way: The number of transistors doubles every two years.”

Moore’s Law, he emphasizes, is simply a prediction about how many transistors can fit in a given area of an IC—whether that’s done, as it has been for decades, in a single layer or by stacking multiple layers. If a company really wanted to, Gargini says, it could continue to make transistors smaller well into the 2020s, “but it’s more economic to go 3-D. That’s the message we wanted to send.”
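Gargini’s single definition is easy to put into numbers: if the transistor count doubles every two years, then over n years it grows by a factor of 2^(n/2), whether those transistors sit in one layer or several. A minimal sketch of that arithmetic (the starting density is an illustrative figure, not a roadmap projection):

```python
def moores_law_density(initial_density: float, years: float) -> float:
    """Project transistor density under Moore's Law as Gargini states it:
    a doubling every two years, regardless of whether the extra
    transistors come from smaller features or stacked layers."""
    return initial_density * 2 ** (years / 2)

# Illustrative only: a chip packing 100 million transistors per mm^2
# today would, on this schedule, hold 3.2 billion per mm^2 in a decade.
print(moores_law_density(100e6, 10))  # five doublings: 3200000000.0
```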

There are other changes on the horizon. In the coming years, before 3-D integration is adopted, the ITRS predicts that leading-edge chip companies will move away from the transistor structure now used in high-performance chips: the FinFET. This device has a gate draped around three sides of a horizontal, fin-shaped channel to control the flow of current. According to the roadmap, chipmakers will abandon that design in favor of a lateral, gate-all-around device, which has a horizontal channel like the FinFET’s but is surrounded by a gate that extends underneath as well. After that, transistors will become vertical, with their channels taking the form of pillars or nanowires standing on end. The traditional silicon channel will also be replaced by channels made of alternative materials, namely silicon germanium, germanium, and compounds drawn from columns III and V of the periodic table.

These changes will allow companies to pack more transistors in a given area and so adhere to the letter of Moore’s Law. But keeping to the spirit of Moore’s Law—the steady improvement in computing performance—is another matter. 

The doubling of transistor densities hasn’t been linked to improvements in computing performance for some time, notes Tom Conte, the 2015 president of the IEEE Computer Society and a co-leader of the IEEE Rebooting Computing Initiative.

For a long time, shrinking transistors meant faster speeds. But in the mid-1990s, Conte says, the extra metal layers added to wire up increasing numbers of transistors were introducing significant delays, and engineers redesigned chip microarchitectures to improve performance. A decade later, transistor densities were so high that their heat limited clock speeds. Companies began packing multiple cores on chips to keep things moving.

“We’ve been living in this bubble where the computing industry could rely on the device side to do their job, and so the computer industry and the device industry really had this very nice wall between them,” says Conte. “That wall really started to crumble in 2005, and since that time we’ve been getting more transistors but they’re really not all that much better.” 

This crumbling wall was a strong motivation for the IEEE Rebooting Computing Initiative to begin collaborating with the ITRS last year, before the launch of the IRDS. “I like to say we could see the light at the end of the tunnel, and we knew it was an oncoming train,” says Conte.

The initiative held a summit last December that covered a gamut of potential future computing technologies, including new kinds of transistors and memory devices, neuromorphic computing, superconducting circuitry, and processors that use approximate instead of exact answers.

The first international Rebooting Computing conference will be held in October this year; IRDS meetings will coincide with such events, Conte says. The IRDS will still track “Moore’s Law to the bitter end,” Conte explains. But the roadmapping focus has changed: “This isn’t saying this is the end of Moore’s Law,” he says. “It’s stepping back and saying what really matters here—and what really matters here is computing.”

A fork bent into the shape of a hand giving a thumbs-up.

“Hard Fork” Coming to Restore Ethereum Funds to Investors of Hacked DAO

It’s been a little over a month since The DAO, the most visible and well-funded application on the Ethereum network, crashed and burned at the hands of an anonymous hacker. Until its spectacular demise this June, The DAO was a centerpiece of the Ethereum ecosystem. It was intended to operate as a decentralized, automated investment fund, which at one point had over US $150 million worth of the cryptocurrency Ether at its disposal. Had it succeeded, it would have been a shining early example of Ethereum’s potential as a smart-contract platform, adding credibility to the thoroughly hyped idea that blockchains are poised to replace our legal, political, and social frameworks with code.

Those dreams were smashed on 17 June when a hacker exploited a flaw in The DAO and drained it of some 3.6 million Ethers. Due to the nature of The DAO’s underlying structure, however, these funds are stuck and cannot be used in a transaction until 28 July, meaning the hacker has no way of cashing out for another nine days.

Provided with this handy deadline, the Ethereum community has spent the last month passionately debating whether it is possible, and more importantly advisable, to make investors whole again. In many ways, it is a lose-lose situation. If no action is taken, then there is a good chance that legal action will ensue—initiated either by investors in The DAO or by federal regulators, who are already sniffing at the hem of the blockchain.

Photo of IEEE Life Fellow Charles Kao in the lab

How Charles Kao Beat Bell Labs to the Fiber-Optic Revolution

Fifty years ago this month, a 32-year-old Chinese-born research engineer named Charles Kao published a milestone paper that set off the entire field of fiber-optic communications and eventually earned him a share of the 2009 Nobel Prize in Physics. The story of how that 1966 paper came to be is a wonderful example of how the key to a big technological breakthrough can come down to asking the right question.


DARPA Challenge Tests AI as Cybersecurity Defenders

Today’s malicious hackers have an average of 312 days to exploit “zero-day” computer software flaws before human cybersecurity experts can find and fix those flaws. The U.S. military’s main research agency focused on disruptive technologies aims to see whether artificial intelligence can do a better job of finding and fixing such exploits within a matter of seconds or minutes.

Mostly white-garbed pilgrims during the Hajj, in Saudi Arabia.

Hajj Pilgrims to Get Electronic Bracelets to Prevent a Repeat of 2015 Stampede

A stampede that killed hundreds and perhaps thousands of Hajj pilgrims made the 2015 disaster the deadliest on record for the world’s largest Islamic gathering. Hoping to prevent another accident this September, officials plan to issue electronic bracelets to guide and keep track of the millions of pilgrims expected to visit Islam’s holy sites at Mecca in Saudi Arabia.

The Saudi Press Agency described the water-resistant bracelets as GPS-linked devices containing personal identification and medical records that Saudi officials and security forces could access via smartphone, according to BBC News. Personal information on each pilgrim would include passport numbers and addresses. In addition, Saudi officials have installed 1,000 new surveillance cameras to keep an eye on the pilgrims as they walk along pilgrimage routes and crowd inside holy sites.


For Best Results, Send Molecular Messages Using MIMO

Just as cell phones propagate radio waves to connect users, it’s also possible to transmit messages by emitting molecules. A sender can compose a message by diffusing bursts of a specific type of molecule, so long as a recipient can detect that molecule and interpret the pattern. Using this method, nanodevices could create a digital signaling system within the body, a locale where radio waves are quickly absorbed and where there is little space for bulky antennas.

Neurons and other cells in the body already communicate through the transfer of neurotransmitters, hormones, and other signaling molecules. And researchers have shown in early experiments that molecular digital communication through the air works, but only at a fairly low data rate. Now, a team from Toronto’s York University and Yonsei University in Seoul, South Korea, has found that it can nearly double that data rate by applying multiple-input multiple-output (MIMO) technology.
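The basic signaling scheme described above can be sketched as a toy simulation. This is not the York/Yonsei team’s actual system; it is an illustrative on-off-keying model in which leftover molecules from earlier bursts decay each time slot, and the “MIMO” gain comes simply from running two chemically distinct molecule types as parallel channels:

```python
# Toy on-off-keying molecular link: in each time slot the emitter either
# releases a burst of molecules (bit 1) or stays silent (bit 0), and the
# receiver thresholds the concentration it measures. Molecules lingering
# from earlier slots are modeled as simple exponential decay.
def transmit(bits, burst=100.0, decay=0.3):
    """Return the concentration the detector sees in each slot."""
    residual = 0.0
    readings = []
    for b in bits:
        residual = residual * decay + (burst if b else 0.0)
        readings.append(residual)
    return readings

def detect(readings, threshold=50.0):
    """Threshold each reading back into a bit."""
    return [1 if r >= threshold else 0 for r in readings]

# Single link: one emitter/detector pair, one bit per slot.
msg = [1, 0, 1, 1, 0]
assert detect(transmit(msg)) == msg

# "MIMO" link: two chemically distinct molecule types act as two
# independent parallel channels, doubling the bits sent per slot.
stream_a, stream_b = [1, 0, 1], [0, 1, 1]
assert [detect(transmit(s)) for s in (stream_a, stream_b)] == [stream_a, stream_b]
```

The real experiments contend with noise and much messier inter-symbol interference than this decay model; the sketch only shows why parallel molecular channels raise throughput.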

Adam Savage pedal-powered beest walking machine

Watch Adam Savage’s Pedal-Powered Beest Machine Take Its First Steps

Adam Savage, the former co-host of Discovery Channel’s popular television show MythBusters, is accustomed to testing the limits of human ingenuity. Do you remember when he and his team of tinkerers tested the tensile strength of duct tape by suspending a car with it? After that episode, I never looked at my ratty duct tape the same way.

Since MythBusters ended earlier this year, Savage has had some extra time on his hands. For an avid designer and maker like Savage, that means spending three days hunkered down at the Exploratorium museum in San Francisco building his latest creation. He called it the “Pedal-Powered Beest.” And the thing does look beastly, except for the bright-red All Star sneakers pinned to its 12 feet.


Use a GPU to Turn a PC Into a Supercomputer

As Moore’s Law slows for CPUs, dedicated graphics co-processors are picking up some of the slack. Just as GPUs are changing the game in deep learning and autonomous cars, the GPU-powered desktop PC might even begin to keep pace with the conventional supercomputer for a portion of supercomputer applications. 

For instance, Russian scientists are reporting this month that they’ve been able to solve computational problems in nuclear physics using an off-the-shelf, high-end PC containing a GPU. And, they say, after fine-tuning their algorithm for GPUs, they were able to run their calculations faster than the traditional, CPU-powered supercomputer their colleagues use. Bonus: they ran those calculations for free, unlike their colleagues, who must pay for access to the supercomputer.


Your Smart Watch Can Steal Your ATM PIN

Mobile systems and cybersecurity expert Yan Wang doesn’t wear a smart watch.

“It knows too much,” says Wang, an assistant professor of computer science at Binghamton University in Upstate New York. “If you are using a smart watch, you need to be cautious.”

Wearable devices can give away your PIN, according to research by professor Yingying Chen at Stevens Institute of Technology and three of her current and former graduate students, including Wang. By combining smart watch sensor data with an algorithm to infer key-entry sequences from even the smallest of hand movements, the team was able to crack private ATM PINs with 80 percent accuracy on the first try and more than 90 percent accuracy after three tries.
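To see why small hand movements leak so much, consider a stripped-down version of the idea. The sketch below is not the team’s algorithm; it assumes the sensor data has already been reduced to displacement vectors between key presses, and it simply walks those vectors across a hypothetical keypad layout, snapping each estimated position to the nearest key:

```python
# Hypothetical ATM keypad geometry: column, row coordinates in "key
# pitch" units, with 0 centered under the 8 on the bottom row.
KEYPAD = {d: (c, r) for r, row in enumerate(("123", "456", "789", " 0 "))
          for c, d in enumerate(row) if d != " "}

def infer_pin(start_digit, displacements):
    """Recover a key sequence from the first key pressed plus the
    (dx, dy) hand displacement measured between presses, snapping the
    estimated fingertip position to the closest key each time."""
    x, y = KEYPAD[start_digit]
    digits = [start_digit]
    for dx, dy in displacements:
        x, y = x + dx, y + dy
        # Noisy motion estimates still land closest to the true key.
        nearest = min(KEYPAD, key=lambda d:
                      (KEYPAD[d][0] - x) ** 2 + (KEYPAD[d][1] - y) ** 2)
        digits.append(nearest)
        x, y = KEYPAD[nearest]  # re-anchor on the snapped key
    return "".join(digits)

# Displacements with ~10 percent noise still recover the sequence 1-5-8-3.
print(infer_pin("1", [(1.1, 0.9), (-0.1, 1.05), (0.9, -2.1)]))  # -> 1583
```

The hard part of the real attack is the step this sketch assumes away: turning raw accelerometer and gyroscope traces into those displacement vectors.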

“This was surprising even to those of us already working in this area,” Chen, who led the research, said in a press release. “It may be easier than we think for criminals to obtain secret and private information from our wearables by using the right techniques.” 

Photos: University of Washington

One Million Faces Challenge Even the Best Facial Recognition Algorithms

Helen of Troy may have had the face that launched a thousand ships, but even the best facial recognition algorithms may have had trouble finding her face in a crowd of one million strangers. The first benchmark test based on one million faces has shown how facial recognition algorithms from Google and other research groups around the world can still fall short in accurately identifying and verifying faces.

Facial recognition algorithms that had previously performed with more than 95 percent accuracy on a popular benchmark test involving 13,000 faces saw significant drops in accuracy when faced with the new MegaFace Challenge involving one million faces. The best performer on one test, Google’s FaceNet algorithm, dropped from near-perfect accuracy on five-figure datasets to 75 percent on the million-face test. Other top algorithms dropped from above 90 percent accuracy on the small datasets to below 60 percent on the MegaFace Challenge. Some algorithms made the proper identification as seldom as 35 percent of the time.

“MegaFace’s key idea is that algorithms should be evaluated at large scale,” says Ira Kemelmacher-Shlizerman, an assistant professor of computer science at the University of Washington in Seattle and the project’s principal investigator. “And we make a number of discoveries that are only possible when evaluating at scale.”


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.

