When it comes to developing new ways to fabricate microchips, the best approach may be to blend human- and computer-developed designs rather than rely on either alone. Such collaborations might cut costs by half compared with depending on human experts alone, a new study finds.
“While humans are still essential due to their expertise and ability to solve challenging, out-of-the-box problems, our findings show where the ‘human first, computer last’ strategy can help address the tedious aspects of process development, thereby significantly speeding up innovation,” says study senior author Richard Gottscho, executive vice president and strategic advisor to the CEO at Lam Research Corp. in Fremont, Calif. “As chipmakers look to conquer the many challenges associated with scaling 3D NAND, FinFETS, DRAM and other devices, the implications are really exciting.”
Currently, one of the bottlenecks to building microchips is the growing cost of developing the semiconductor processes that fabricate transistors and memory cells. These complicated processes, each involving hundreds of steps, are still conceived manually by highly trained engineers.
The way in which artificial intelligence (AI) can outperform humans at complex tasks—for instance, board games such as chess and Go—suggests that computer algorithms might also help develop semiconductor processes. However, to beat people at board games, computers were trained on large amounts of inexpensive data. In contrast, generating semiconductor process data is costly: individual experiments can cost more than a thousand dollars each, due to the materials, equipment, and analytical tools involved.
The high costs of semiconductor experiments mean that engineers typically develop semiconductor processes by testing on the order of just a hundred different combinations, or “recipes,” of parameters—for instance, plasma pressure and wafer temperature—for the machines that manufacture the devices. This limited data makes it difficult to create a predictive model with good accuracy down to the atomic scale.
“When creating a memory hole in a 3D NAND device or etching another device feature, process engineers are faced with well over a hundred trillion different possible recipes for high-aspect-ratio etching,” Gottscho says. “This number is simply overwhelming.”
In the new study, the researchers investigated how AI might reduce the cost of developing semiconductor processes. Specifically, they explored optimization algorithms based on the statistical approach known as Bayesian reasoning, in which prior knowledge helps compute the chances that an uncertain choice might be correct. Bayesian optimization algorithms can prove effective when there is scarce data, and scientists have previously investigated their use for other applications in the semiconductor industry.
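To make the idea concrete, here is a minimal, illustrative sketch of Bayesian optimization—not the study's actual algorithms. A Gaussian-process surrogate summarizes the few "experiments" run so far, and an acquisition rule (a lower confidence bound, since we minimize) picks the most promising next recipe. The 1-D objective and all parameter names are hypothetical stand-ins for a costly etch experiment:

```python
# Minimal Bayesian-optimization sketch on a toy 1-D "recipe" space.
# The objective and all names are illustrative, not from the study.
import numpy as np

def objective(x):
    # Hypothetical stand-in for a $1,000 experiment: error between
    # achieved and target etch depth as a function of one knob x.
    return (np.sin(3 * x) + 0.5 * x - 1.0) ** 2

def gp_posterior(X, y, Xs, length=0.3, noise=1e-4):
    # Gaussian-process posterior mean/variance with an RBF kernel.
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

rng = np.random.default_rng(0)
candidates = np.linspace(0, 2, 200)      # discretized recipe space
X = rng.uniform(0, 2, 3)                 # a few initial experiments
y = objective(X)
for _ in range(10):                      # each iteration = one costly run
    mu, var = gp_posterior(X, y, candidates)
    lcb = mu - 2.0 * np.sqrt(var)        # lower confidence bound (minimizing)
    x_next = candidates[np.argmin(lcb)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]
```

The point of the surrogate is exactly the scarce-data regime described above: the model extracts as much guidance as possible from a handful of expensive measurements before committing to the next one.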
To see if machines might do better than humans at this task, the scientists created a way to systematically benchmark their performance against each other. Inspired by computer advances in chess and Go, study lead author Keren Kanarik, technical managing director at Lam Research, suggested developing a game as a test bed for comparisons.
In experiments, players worked on a lab simulator where they were asked to etch a memory hole, which is a trench in a silicon dioxide film that is used to create a memory cell. The goal was to use as little money as possible to find a recipe to produce a memory hole with a specific depth, width, and shape.
The players were three computer algorithms; three human senior engineers with doctorates who each had more than seven years of experience; three human junior engineers with doctorates who each had less than one year of experience; and three human volunteers with no knowledge about semiconductor processes. At the end of each round, players submitted a batch of one or more recipes. Each recipe cost US $1,000 for wafer and measurement costs, and each batch cost $1,000 for tool operation.
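The game's pricing, as stated, can be expressed as a simple cost model (a hypothetical helper, written here only to illustrate the batching economics):

```python
# Cost model from the game's stated pricing:
# $1,000 per recipe (wafer + measurement) plus $1,000 per batch (tool operation).
def batch_cost(num_recipes):
    return 1000 * num_recipes + 1000

# 100 recipes run as 100 single-recipe batches:
one_at_a_time = sum(batch_cost(1) for _ in range(100))   # $200,000
# The same 100 recipes run as 20 batches of 5:
batched = sum(batch_cost(5) for _ in range(20))          # $120,000
```

Because the per-batch fee is amortized across its recipes, players had an incentive to submit larger batches, trading off against the value of seeing results before choosing the next experiments.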
The best player was a human senior engineer, who produced the requested memory holes after a total cost of $105,000. Only 13 out of 300 computer attempts—less than 5 percent—beat this human expert. All in all, the scientists found the human senior engineers required roughly half the cost of the human junior engineers for the same amount of progress.
The scientists running the game found that the work from each human engineer was split into two stages. In the initial rough-tuning stage, they displayed rapid improvement toward meeting the target, and in the later fine-tuning stage, they made slow progress to meet all the desired goals simultaneously.
The researchers suggested the computer algorithms failed because they lacked expert knowledge and so wasted experiments navigating the vast number of possibilities. Therefore, they tested a strategy where the best player guided the algorithms in a ‘human first, computer last’ scenario. They found this hybrid approach could reach the target for $52,000, just under half the cost of the human expert alone.
The new study reveals human engineers may excel in the early stage of rough tuning, when they can draw on their experience and intuition. Computer algorithms may prove far more cost-efficient in the later stage of fine-tuning when striving to reach precise targets.
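The two-stage division of labor can be sketched as follows. This is an illustrative toy, not the study's method: a simple local random search stands in for the actual fine-tuning optimizer, and the objective and recipe values are invented.

```python
# Sketch of the 'human first, computer last' handoff. All names and the
# objective are hypothetical; a local random search stands in for the
# study's optimization algorithms.
import random

def experiment(pressure, temperature):
    # Hypothetical stand-in for a costly etch run: deviation from target.
    return abs(pressure - 42.0) + abs(temperature - 310.0)

# Stage 1 (human, rough tuning): expert intuition narrows trillions of
# possible recipes down to a promising starting point.
human_recipe = (40.0, 305.0)

# Stage 2 (computer, fine tuning): search locally around the handoff point.
random.seed(1)
best, best_err = human_recipe, experiment(*human_recipe)
for _ in range(50):
    p = best[0] + random.uniform(-2, 2)
    t = best[1] + random.uniform(-5, 5)
    err = experiment(p, t)
    if err < best_err:
        best, best_err = (p, t), err
```

The handoff matters because the computer's search budget is spent only in the small region the human has already vetted, rather than across the full recipe space.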
“This research reinforces the importance and intrinsic value of human engineers and human ingenuity but also shows us a way to amplify those benefits while reducing the less rewarding aspects of engineering by giving them to machines well suited to the task,” Gottscho says. “You can take the best of what humans have to offer and the best of what data science and machines offer, put them together, and create a combination that performs better than either one alone.”
The scientists note future research can systematically investigate the best points at which to hand off work from humans to computers. They add there will likely also be cultural challenges in partnering humans with computers. For instance, the study found that while human engineers often change just one or two parameters from experiment to experiment, computers may alter more without explanation, and humans may find it difficult to accept recipes they do not understand.
“AI and computers process information in a way that is counter-intuitive for most humans,” Gottscho says. “For the unconventional ‘human first, computer last’ approach to be successful, process engineers will need to resist intervening in the machine process. This may require a change in human behavior. Better understanding of the AI approach may help engineers trust the machine findings and, ultimately, lead to greater potential opportunities for utilizing computer algorithms in the future.”
The scientists detailed their findings online 8 March in the journal Nature.