Intel Labs Director Talks Quantum, Probabilistic, and Neuromorphic Computing

Rich Uhlig, who took over Intel Labs late last year, discusses Intel’s vision for the future of computing

Intel has done pretty well for itself by consistently figuring out ways of making CPUs faster and more efficient. But with the end of Moore’s Law lurking on the horizon, Intel has been exploring ways of extending computing with innovative new architectures at Intel Labs.

Quantum computing is one of these initiatives, and Intel Labs has been testing its own 49-qubit processors. Beyond that, Intel Labs is exploring neuromorphic computing (emulating the structure and, hopefully, some of the functionality of the human brain with artificial neural networks) as well as probabilistic computing, which is intended to help address the need to quantify uncertainty in artificial intelligence applications.

Rich Uhlig has been the director of Intel Labs only since December 2018, but he’s been at Intel since 1996 (most recently as director of Systems and Software Research for Intel Labs), so he’s well positioned to hit the ground running. We spoke with Uhlig about quantum, neuromorphic, and probabilistic computing; how these systems will help us manage AI; and what these technologies will make possible that should concern us at least a little bit.

IEEE Spectrum: According to Intel’s timeline of quantum computing, we’re currently in the “system phase.” What does that mean, and how will we transition to the commercial phase?

Rich Uhlig, director of Intel Labs. Photo: Courtesy of Intel Labs

Rich Uhlig: At Intel, we’re focused on developing a commercially viable quantum computer, which will require more than the qubits themselves. We have successfully manufactured a 49-qubit superconducting chip, which allows us to begin integrating the quantum processing unit (the QPU) into a system where we can build all of the components required to make the qubits work in tandem, improving efficiency and scalability. Instead of focusing on the hype of qubit count, we are working to create a viable quantum system that will scale from 50 qubits to the millions of qubits a commercial system will require.

What’s so great about the brain that we want to mimic it with neuromorphic computing?

What’s fascinating about the brain is that it processes highly complex information in real time, and does so with very little energy. Our goal is not necessarily to mimic the brain but to understand the principles that give the brain such impressive and efficient functionality, and then to apply those principles to chips we can build. Many of those principles—relating to fine-grained parallelism, computing with dynamics, temporal coding of information, event-driven operation, and many others—directly inspire new features, architectures, and algorithms that we believe will lead to breakthrough gains in both the capabilities and efficiency of computing systems.
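
To make the event-driven, temporal-coding principle concrete, here is a minimal sketch of delta encoding, one simple way to turn a sampled signal into sparse, spike-like events. The delta_encode function and its threshold are illustrative assumptions, not a description of any Intel design.

```python
import math

# A toy event-driven encoder: emit a "spike" event only when the signal
# changes by more than a threshold, rather than sampling at a fixed rate.
def delta_encode(samples, threshold=0.25):
    """Return (time_index, +1/-1) events marking threshold crossings."""
    events = []
    reference = samples[0]
    for t, value in enumerate(samples[1:], start=1):
        while value - reference >= threshold:   # upward change -> ON event
            events.append((t, +1))
            reference += threshold
        while reference - value >= threshold:   # downward change -> OFF event
            events.append((t, -1))
            reference -= threshold
    return events

# A slowly varying signal yields only a handful of events; between events
# the encoder is silent, which is where event-driven hardware saves energy.
signal = [math.sin(2 * math.pi * t / 50) for t in range(100)]
print(delta_encode(signal))
```

The information is carried in when the events occur rather than in a dense stream of samples, which is the essence of temporal coding.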

Why is probabilistic computing important enough to Intel to be listed alongside quantum and neuromorphic?

Probabilistic computing allows us to deal with uncertainty in the natural data around us, as well as to predict events in the world with an understanding of data and model uncertainty. Predicting what will happen next in a scenario, as well as the effects of our actions, can only be done if we know how to model the world around us with probability distributions. The uncertainty measures we get by augmenting deep learning with probabilistic methods open the door to understanding why AI systems make the decisions they do, which will help with issues like tackling bias in AI systems.
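
As a concrete illustration of what an uncertainty measure buys you, here is a minimal sketch of one common probabilistic augmentation: querying an ensemble of models and reporting a spread alongside the prediction. The toy linear “models” below are placeholders, not any Intel system.

```python
import random

# Stand-in "trained models": each ensemble member has slightly different
# learned parameters, so their disagreement reflects model uncertainty.
def make_toy_model():
    w = random.gauss(1.0, 0.1)
    b = random.gauss(0.0, 0.05)
    return lambda x: w * x + b

ensemble = [make_toy_model() for _ in range(100)]

def predict_with_uncertainty(x):
    """Return a mean prediction plus a standard-deviation 'error bar'."""
    preds = [m(x) for m in ensemble]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var ** 0.5

mean, std = predict_with_uncertainty(2.0)
print(f"prediction = {mean:.2f} +/- {std:.2f}")
# A downstream policy can treat a large std as a signal to defer to a
# human, gather more data, or fall back to a conservative action.
```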

Our research into probabilistic computing is really about establishing a new way to evaluate the performance of the next wave of artificial intelligence, one that requires real-time assessment of “noisy” data. The first AI systems focused on logic: pre-programmed rules. The second wave of AI aims to advance the ability to sense and perceive information, leveraging neural networks that learn over time. But neither of these approaches can do the things human beings do naturally as we navigate the world. They can’t think through multiple potential scenarios based on the data at hand while remaining aware of the data they don’t have.

One example of why this concept is so important: if you are driving a car and see a soccer ball roll into the street, your immediate and natural reaction is to stop the car, since you assume a child running after the ball isn’t far behind.

The driver reaches the decision to stop the car based on experience with natural data and assumptions about human behavior. But a traditional computer likely wouldn’t reach the same conclusion in real time, because today’s systems are not programmed to mine noisy data efficiently and to make decisions based on environmental awareness. For an application like autonomous driving, you would want a probabilistic system calling the shots, one that could quickly assess the situation and act (stop the car) immediately.
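
The soccer-ball scenario can be reduced to a toy expected-cost calculation. The probabilities and costs below are invented for illustration, but they show how “stop the car” falls out of a probability model rather than a hand-coded rule.

```python
# Assumed belief: a ball in the street often means a child follows.
p_child = 0.3

# Invented costs for each (action, child-present?) outcome.
costs = {
    ("stop", True): 1,               # brief delay, child was there
    ("stop", False): 1,              # brief delay, false alarm
    ("continue", True): 1_000_000,   # catastrophic outcome
    ("continue", False): 0,          # no delay, nothing happened
}

def expected_cost(action):
    """Average the cost of an action over the belief about the world."""
    return p_child * costs[(action, True)] + (1 - p_child) * costs[(action, False)]

for action in ("stop", "continue"):
    print(action, expected_cost(action))          # stop: 1.0, continue: 300000.0
print("chosen:", min(("stop", "continue"), key=expected_cost))
# "stop" dominates even when the chance of a child is far below certainty.
```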

Will you need new kinds of devices for neuromorphic and probabilistic computing? What kinds of properties do these need to have?

For now, we believe the innovations inspired by these new computing paradigms can provide meaningful gains for chips manufactured with today’s process technology. However, in the years to come, and to continue progressing, we will need device-level advances. In the case of neuromorphic computing, that means denser memory technology and new materials with nonvolatile plasticity dynamics. In the case of probabilistic computing, it means novel and efficient implementations that enable calculation with probability distributions. For both neuromorphic and probabilistic computing, the ultimate efficiency gains will likely require devices and circuits that harness physical sources of noise to directly embody stochastic dynamics.
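
That last idea, computing with physical noise rather than in spite of it, can be sketched with a stochastic binary unit, sometimes called a probabilistic bit (p-bit) in the research literature. This is a hedged illustration, not Intel’s device: here a pseudo-random generator stands in for the intrinsic noise a real circuit would harness.

```python
import math
import random

def p_bit(drive):
    """A stochastic unit: output 1 with probability sigmoid(drive).
    In hardware, the randomness would come from physical noise."""
    return 1 if random.random() < 1 / (1 + math.exp(-drive)) else 0

# Each individual sample is noisy, but averaging many of them recovers the
# underlying probability, so the noise itself performs the computation.
samples = [p_bit(drive=1.0) for _ in range(10_000)]
print(sum(samples) / len(samples))   # close to sigmoid(1.0), about 0.73
```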

Are new devices shaping the types of computing Intel is focusing on, or is the type of computing shaping the drive for new devices?

It is an interplay in both directions. The evolution of device technology and Moore’s Law is making new architectures possible, and new architectural ideas are driving the requirements for future device technologies. But what’s really driving requirements for both is the exponentially growing amount of new data being collected out in the world. The collection, storage, and analysis of this data will require new models of computation and has the potential to create incredible new experiences for us all.

Is the future of AI spiking neural nets?

The spiking neural network (SNN) is the natural successor to the artificial neural networks used for deep learning today. By directly integrating temporal dynamics into their operation, SNNs are very well suited to processing real-world sensory data, such as sound or video, especially when fast responses and adaptation are needed. From an algorithmic perspective, spiking neurons provide a principled approach for building neural networks that process events in time, for example to support one-shot learning or to make decisions. From an implementation perspective, spikes allow neuromorphic architectures to exploit the highly sparse activity of these algorithms to deliver significant gains in energy efficiency. These advantages offer great value at the edge, such as on the manufacturing floor, in autonomous vehicles, or in robotics, where unpredictable data needs to be processed and assimilated in real time.
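
For readers unfamiliar with spiking models, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit of most SNNs. The parameters are illustrative and not taken from any Intel chip.

```python
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """A leaky integrate-and-fire neuron: integrate input over time,
    leak charge each step, and emit a spike when a threshold is crossed."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i        # leaky integration of the input
        if v >= threshold:      # membrane potential crosses threshold
            spikes.append(t)    # record the spike time ...
            v = 0.0             # ... and reset the membrane
    return spikes

# A constant weak input yields a sparse, regularly timed spike train;
# information lives in the spike *timing*, and between spikes the neuron
# does no work, which is where the energy savings come from.
print(lif_neuron([0.3] * 50))   # spikes at steps 3, 7, 11, ...
```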

What do you think are likely to be the first practical applications of quantum and neuromorphic computing that most people will benefit from?

Quantum computing will solve problems that would take traditional computers months or years, or that are completely intractable today, in areas such as drug development, financial modeling, and climate forecasting. For neuromorphic chips, the first applications will likely be those that require real-time customization of pre-trained functions, depending on the unique environment of a particular device. For example, neuromorphic chips may enable speech-recognition systems to adapt autonomously to users with strong accents, or control robotic arms in dynamic environments.

Is there any concern that using these computing techniques will make it more difficult for us to understand why future computing systems make the decisions that they do? To what extent will decisions be explainable, and how can we improve that?

This is a valid concern and an active area of research, often called “explainable AI.” For example, we would never recommend launching a device that can endanger human safety if its engineers cannot articulate how or why it came to the response that it did. We believe that probabilistic computing may offer some advantages in that it provides a framework for understanding the potential error in an answer, which may be useful to higher-level policies that make decisions about how a system ultimately engages the physical world.

What about how this technology is evolving keeps you awake at night?

As with any new technology, there can be unintended consequences as it is used for both good and bad. As an example, one potential application of quantum computing is to break widely used cryptographic algorithms, putting sensitive data at risk. Although we haven’t reached that point yet, it’s not too early to begin developing new cryptography that will remain robust in a post-quantum world. We should be similarly mindful of how advances in AI will change our relationship with data and how we make decisions, as well as when we delegate certain decisions to machines. The real challenge may be to retain awareness and to be intentional about those choices, as opposed to just letting them happen.
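
For context on that cryptographic threat: Shor’s algorithm breaks RSA-style cryptography by quickly finding the order r of a number a modulo N, from which the factors of N follow. The sketch below performs the order-finding step classically, which is feasible only for toy numbers, just to show the reduction at work.

```python
from math import gcd

def find_order(a, N):
    """Smallest r > 0 with a**r % N == 1 -- the step a quantum
    computer would accelerate exponentially."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                    # toy modulus and base
r = find_order(a, N)            # r = 4 here
assert r % 2 == 0               # Shor retries with a new base otherwise
p = gcd(a ** (r // 2) - 1, N)   # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)   # gcd(50, 15) = 5
print(f"{N} = {p} * {q}")       # 15 = 3 * 5
```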

A version of this post appears in the April 2019 print magazine as “Intel Labs’ Rich Uhlig.”
