Intel Starts R&D Effort in Probabilistic Computing for AI

Seeks ways to help self-driving cars and autonomous robots deal with the uncertainty of the real world

Robot fixing a sink. Photo: iStockphoto

Intel announced today that it is forming a strategic research alliance to take artificial intelligence to the next level. Autonomous systems don’t have good enough ways to respond to the uncertainties of the real world, nor to understand how the uncertainties of their sensors should factor into the decisions they need to make. According to Intel CTO Mike Mayberry, the answer is “probabilistic computing,” which he says could be AI’s next wave.

IEEE Spectrum: What motivated this new research thrust?

Mike Mayberry: We’re trying to figure out what the next wave of AI is. The original wave of AI is based on logic and it’s based on writing down rules; it’s closest to what you’d call classical reasoning. The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there. Those two by themselves don’t add up to all the things that human beings do naturally as they navigate the world.

An example of this would be where you are startled by something—let’s say a car siren. You’d automatically be thinking of different scenarios that would be consistent with the data you have and you would also be conscious of the data you don’t have. You would be inferring a probability. Maybe the probability is figuring out whether the siren is coming from ahead of you or behind you. Or whether it is going to make you late for a meeting. You automatically do things that machines have trouble with. We run into those situations all the time in real life, because there’s always uncertainty around what is the current situation.
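
To make the siren example concrete, the sketch below shows the kind of inference Mayberry is describing: a discrete Bayesian update over two scenarios, “siren ahead” versus “siren behind,” given one noisy directional cue. The scenarios, the cue, and all of the probability values are invented for illustration; this is not Intel’s code.

```python
# Illustrative only: a Bayesian update over two invented scenarios,
# "siren ahead" vs. "siren behind," given a single noisy cue.

prior = {"ahead": 0.5, "behind": 0.5}           # no information yet

# Likelihood of the observation "the sound seems louder in front"
# under each scenario; directional hearing is informative but not certain.
likelihood = {"ahead": 0.7, "behind": 0.3}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

print(posterior)   # roughly {'ahead': 0.7, 'behind': 0.3} -- the uncertainty is kept, not discarded
```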

Current AI and deep-learning systems have been described as brittle. What we mean by that is they are overconfident in their answers. They’ll say with 99 percent certainty that there is something in a picture that they think they recognize. But in many cases that probability is incorrect; the confidence is not as high as [the AI] thinks it is.
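
One standard way researchers describe and mitigate this kind of overconfidence is confidence calibration, for instance by softening a network’s softmax outputs with a temperature parameter. The sketch below is a generic illustration of that idea with made-up scores; it is not Intel’s method and not part of the research program Mayberry describes here.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw network scores (logits) into a probability distribution.
    Temperatures above 1 soften the distribution and reduce overconfidence."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [8.0, 2.0, 1.0]                  # made-up scores for three classes

print(softmax(logits))                    # ~[0.997, 0.002, 0.001]: "99 percent sure"
print(softmax(logits, temperature=4.0))   # ~[0.72, 0.16, 0.12]: a more honest spread
```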

So what we’d like to do in a general research thrust is figure out how to build probability into our reasoning systems and into our sensing systems. And there are really two challenges in that. One is how you compute with probabilities, and the other is how you store memories or scenarios with probabilities.
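
As a toy illustration of those two challenges, the hypothetical ScenarioBelief class below stores named scenarios together with their probabilities and updates them as evidence arrives. It is only a sketch of the general idea, not Intel’s software framework, and the scenarios and likelihood numbers are invented.

```python
# A toy sketch of the two challenges Mayberry names: computing with
# probabilities, and storing scenarios together with their probabilities.
# Hypothetical illustration only -- not Intel's framework.

class ScenarioBelief:
    """Keeps a probability for each named scenario and updates it with evidence."""

    def __init__(self, scenarios):
        n = len(scenarios)
        self.probs = {s: 1.0 / n for s in scenarios}      # uniform prior

    def update(self, likelihoods):
        """Bayesian update: weight each scenario by how well it explains
        the new evidence, then renormalize."""
        for s in self.probs:
            self.probs[s] *= likelihoods.get(s, 1.0)
        total = sum(self.probs.values())
        self.probs = {s: p / total for s, p in self.probs.items()}

    def most_likely(self):
        return max(self.probs, key=self.probs.get)


# Invented example: what is the object ahead of an autonomous car?
belief = ScenarioBelief(["pedestrian", "cyclist", "plastic bag"])
belief.update({"pedestrian": 0.6, "cyclist": 0.3, "plastic bag": 0.4})  # blurry camera frame
belief.update({"pedestrian": 0.8, "cyclist": 0.5, "plastic bag": 0.1})  # radar return
print(belief.probs, belief.most_likely())
```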

So we’ve been doing a certain amount of internal work and work with academia, and we’ve decided that there’s enough here that we’re going to kick off a research community. The goal is to have people share what they know about it, collaborate on it, and figure out how you represent probability when you write software and how you construct computer hardware. We think this will be... part of the third wave of AI. We don’t think we’re done there; we think there are other things as well, but this will be around probabilistic computing.

Spectrum: That term has been used in the past to describe many things that aren’t related to AI, such as stochastic computing and error-tolerant computing. How is your use of it different?

Mayberry: We’re using [probabilistic computing] in a slightly different sense than before. For example, stochastic computing is about getting a good-enough answer even with errors. Fuzzy logic is actually closer to the concept we’re talking about here, where you’re deliberately keeping track of uncertainties as you process information. There’s statistical computing too, which is really more of a software approach, where you’re keeping track of probabilities by building trees. So again, these are not necessarily new concepts. But we intend to apply them differently than has been done in the past.
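
As a generic illustration of “deliberately keeping track of uncertainties as you process information,” the sketch below carries a value and its variance together and propagates them through addition, assuming independent errors. It is not any of the specific approaches Mayberry names, and the odometry numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Uncertain:
    """A value carried together with its uncertainty (variance).
    Assumes independent errors; a generic illustration, not a specific framework."""
    mean: float
    var: float

    def __add__(self, other):
        # For independent quantities, means and variances both add.
        return Uncertain(self.mean + other.mean, self.var + other.var)

# Invented example: distance traveled, measured as two noisy odometry segments.
segment1 = Uncertain(mean=10.0, var=0.04)   # 10 m, standard deviation 0.2 m
segment2 = Uncertain(mean=12.0, var=0.09)   # 12 m, standard deviation 0.3 m

print(segment1 + segment2)   # mean 22.0, variance ~0.13 -- uncertainty tracked, not dropped
```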

Spectrum: Will this involve new kinds of devices?

Mayberry: We’re going to approach it initially by looking at algorithms. Our bias at Intel is to build hardware, but if we don’t really understand how the use model is going to evolve or how the algorithms are going to evolve, then we run the risk of committing to a path too early. So we’re initially going to have research thrusts around algorithms and software frameworks. There will be a piece that will be around what would hardware optimization look like if you got to that point. And, can these things be fooled? You have to think about security early on. Those are the things we’ll be approaching.

(Mayberry recommends a look at the work of Vikash Mansinghka, who leads the Probabilistic Computing Project at MIT.)

Spectrum: How does this fit with Intel’s existing AI efforts?

Mayberry: This is intended to be part of a larger system that incorporates our existing work…. You don’t want your logic system to assume that your sensing is 100 percent accurate, but you don’t want the sensor to necessarily have false information about confidence either. So you have to build a system around how those two components talk to each other and keep track of that kind of information. So perhaps the sensing system reports, “I’ve just had a change in brightness, so my answer is a little less confident than before.”

Keeping track of that kind of information is part of what the system design will look like. We don’t know exactly how we’ll implement that from a software framework point of view.
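
The sketch below is one generic way such bookkeeping could look, not Intel’s design: each sensor reports a measurement together with a variance expressing its confidence, and a fusion step weights sensors by inverse variance, so a camera that has just seen a change in brightness and widened its own variance automatically counts for less. The sensors, values, and variances are all invented.

```python
# Generic illustration (not Intel's design): each sensor reports a value and a
# variance expressing its confidence; fusion weights sensors by inverse variance.

def fuse(measurements):
    """Precision-weighted average of (value, variance) pairs."""
    weights = [1.0 / var for _, var in measurements]
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Invented example: estimated distance to an obstacle, in meters.
camera = (9.8, 0.25)                  # normal lighting: the camera is fairly confident
lidar = (10.3, 0.5)

print(fuse([camera, lidar]))          # ~(9.97, 0.17): leans toward the camera

# "I've just had a change in brightness" -> the camera widens its own variance,
# and the fused estimate automatically leans on the lidar instead.
camera_dimmed = (9.8, 2.0)
print(fuse([camera_dimmed, lidar]))   # ~(10.2, 0.4): leans toward the lidar
```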

Spectrum: What are some potential applications?

Mayberry: Certainly one of our targets is having better autonomous machines, whether they’re cars or household robots or something like that. We think [probabilistic computing] is an important part of making systems more robust. Systems that are highly constrained are less likely to need this kind of capability. The more you put the system into an open environment, where there are more things that can change, the more likely it is you’re going to need to supplement the systems we use today.

We are of course hopeful that this will turn into products within a few years’ time, but this is pre-roadmap. So we’re not committing to anything at this time.

Spectrum: Can you share any more about the time frame?

Mayberry: Proposals are expected on 25 May. And we’ll try to launch this activity this year. As I said, we’d like to influence our roadmap in the next few years, but this is pre-roadmap, so we don’t have a specific product implementation date.

A correction to this article was made on 20 May 2020.
